
python - Scraping page content from a div with BeautifulSoup

Reposted · Author: 太空宇宙 · Updated: 2023-11-03 15:52:50

I am trying to scrape the heading, summary, date, and link for each div on http://www.indiainfoline.com/top-news, using 'class' : 'row':

import urllib2
from bs4 import BeautifulSoup

link = 'http://www.indiainfoline.com/top-news'
redditFile = urllib2.urlopen(link)
redditHtml = redditFile.read()
redditFile.close()
soup = BeautifulSoup(redditHtml, "lxml")
productDivs = soup.findAll('div', attrs={'class': 'row'})
for div in productDivs:
    result = {}
    try:
        heading = div.find('p', attrs={'class': 'heading fs20e robo_slab mb10'})
        title = heading.get_text()
        article_link = "http://www.indiainfoline.com" + heading.find('a')['href']
        summary = div.find('p')
    except AttributeError:
        continue

But I am not getting any of the components. Any suggestions on how to fix this?

Best Answer

Looking at the HTML source, there are many elements with class="row", so you need to filter down to the section chunk that actually contains the row data. In your case, all 16 expected rows live under id="search-list". So extract the section first, then the rows. Since .select returns a list, we use [0] to get the element itself. Once you have the row data, iterate over it and extract the heading, article URL, summary, and so on.

import urllib2
from bs4 import BeautifulSoup

link = 'http://www.indiainfoline.com/top-news'
redditFile = urllib2.urlopen(link)
redditHtml = redditFile.read()
redditFile.close()
soup = BeautifulSoup(redditHtml, "lxml")

# Scope to the section that actually holds the rows, then select rows inside it.
section = soup.select('#search-list')
rowdata = section[0].select('.row')

for row in rowdata[1:]:  # the first row is a header, so skip it
    heading = row.select('.heading.fs20e.robo_slab.mb10')[0].text
    title = 'http://www.indiainfoline.com' + row.select('a')[0]['href']
    summary = row.select('p')[0].text
    print(heading)
    print(title)
    print(summary)

Output:

PFC board to consider bonus issue; stock surges by 4%     
http://www.indiainfoline.com/article/news-top-story/pfc-pfc-board-to-consider-bonus-issue-stock-surges-by-4-117080300814_1.html
PFC board to consider bonus issue; stock surges by 4%
...
...
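The scoping idea above (select the containing section by id first, then the rows inside it) can be demonstrated on a minimal, self-contained snippet; the HTML below is an invented stand-in for the real page, not its actual markup:

```python
from bs4 import BeautifulSoup

# Hypothetical HTML mimicking the structure described in the answer:
# many class="row" elements exist, but only those inside id="search-list"
# carry the data we want, and the first of them is a header row.
html = """
<div class="row">unrelated row outside the section</div>
<section id="search-list">
  <div class="row">header row</div>
  <div class="row"><a href="/a.html"><p class="heading">Story A</p></a><p>Summary A</p></div>
  <div class="row"><a href="/b.html"><p class="heading">Story B</p></a><p>Summary B</p></div>
</section>
"""

soup = BeautifulSoup(html, "html.parser")
section = soup.select('#search-list')   # .select returns a list, hence [0] below
rows = section[0].select('.row')        # rows scoped to the section only

results = []
for row in rows[1:]:                    # skip the header row
    heading = row.select('.heading')[0].text
    link = row.select('a')[0]['href']
    summary = row.select('p')[1].text   # second <p> holds the summary here
    results.append((heading, link, summary))

print(results)
```

Because the selection is scoped through `section[0]`, the unrelated class="row" div outside the section is never matched, which is exactly the filtering that makes the accepted answer work.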

Regarding python - Scraping page content from a div with BeautifulSoup, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45481121/
