I successfully wrote the following code to get the titles of a Wikipedia category. The category contains more than 404 titles, but my output file only gives 200 titles per page. How can I extend my code to follow the category's "next page" link and collect all of the remaining titles as well?
Command: python3 getCATpages.py
Code of getCATpages.py:
from bs4 import BeautifulSoup
import requests
import csv

# getting all the contents of a url
url = 'https://en.wikipedia.org/wiki/Category:Free software'
content = requests.get(url).content
soup = BeautifulSoup(content, 'lxml')

# showing the category-pages summary
catPageSummaryTag = soup.find(id='mw-pages')
catPageSummary = catPageSummaryTag.find('p')
print(catPageSummary.text)

# showing the category-pages only
catPageSummaryTag = soup.find(id='mw-pages')
tag = soup.find(id='mw-pages')
links = tag.findAll('a')

# giving serial numbers to the output print and limiting the print to three
counter = 1
for link in links[:3]:
    print(' ' + str(counter) + " " + link.text)
    counter = counter + 1

# getting the category pages
catpages = soup.find(id='mw-pages')
whatlinksherelist = catpages.find_all('li')
things_to_write = []
for titles in whatlinksherelist:
    things_to_write.append(titles.find('a').get('title'))

# writing the category pages to an output file
with open('001-catPages.csv', 'a') as csvfile:
    writer = csv.writer(csvfile, delimiter="\n")
    writer.writerow(things_to_write)
Best answer
The idea is to follow the "next page" link until there is no "next page" link left on the page; Wikipedia lists at most 200 category members per listing page, which is why the original script stops at 200 titles. We'll maintain a web-scraping session while issuing multiple requests, collecting the desired link titles in a list:
from pprint import pprint
from urllib.parse import urljoin

from bs4 import BeautifulSoup
import requests

base_url = 'https://en.wikipedia.org/wiki/Category:Free software'

def get_next_link(soup):
    return soup.find("a", text="next page")

def extract_links(soup):
    return [a['title'] for a in soup.select("#mw-pages li a")]

with requests.Session() as session:
    content = session.get(base_url).content
    soup = BeautifulSoup(content, 'lxml')

    links = extract_links(soup)
    next_link = get_next_link(soup)
    while next_link is not None:  # while there is a Next Page link
        url = urljoin(base_url, next_link['href'])
        content = session.get(url).content
        soup = BeautifulSoup(content, 'lxml')

        links += extract_links(soup)
        next_link = get_next_link(soup)

pprint(links)
Prints:
['Free software',
'Open-source model',
'Outline of free software',
'Adoption of free and open-source software by public institutions',
...
'ZK Spreadsheet',
'Zulip',
'Portal:Free and open-source software']
The irrelevant CSV-writing part is omitted.
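If you still need the titles in a file as in the original script, a minimal sketch could look like this (assuming the links list collected above; the filename 001-catPages.csv is taken from the question), writing one title per row:

import csv

# write the collected category titles to a CSV file, one title per row
with open('001-catPages.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerows([title] for title in links)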
A similar question about "Pythonic beautifulSoup4: How to get remaining titles from the next page link of a wikipedia category" can be found on Stack Overflow: https://stackoverflow.com/questions/41391168/