
python - Web scraping with Python + Selenium


I want to scrape all the href values inside the "news" class (the URL is in the code below). I tried this code, but it doesn't work...

Code:

from bs4 import BeautifulSoup
from selenium import webdriver

Base_url = "http://www.thehindubusinessline.com/stocks/abb-india-ltd/overview/"

driver = webdriver.Chrome()
driver.set_window_position(-10000,-10000)
driver.get(Base_url)

html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')

for div in soup.find_all('div', class_='news'):
    a = div.findAll('a')
    print(a['href'])

Thanks

Accepted answer

The content you want is inside a frame:

<iframe width="100%" frameborder="0" src="http://hindubusiness.cmlinks.com/Companydetails.aspx?&cocode=INE117A01022" id="compInfo" height="600px">...</iframe>

So first you have to switch to that frame. You can do that by adding these lines:

driver.switch_to.default_content()
driver.switch_to.frame('compInfo')

Full code (run headless):

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

Base_url = "http://www.thehindubusinessline.com/stocks/abb-india-ltd/overview/"

chrome_options = Options()
chrome_options.add_argument("--headless")
driver = webdriver.Chrome(options=chrome_options)  # 'chrome_options=' is deprecated; newer Selenium uses 'options='
driver.get(Base_url)
driver.switch_to.frame('compInfo')  # switch into the iframe that holds the news links
soup = BeautifulSoup(driver.page_source, 'lxml')
for link in soup.select('.news a'):
    print(link['href'])

Output:

/HomeFinancial.aspx?&cocode=INE117A01022&Cname=ABB-India-Ltd&srno=17040010444&opt=9
/HomeFinancial.aspx?&cocode=INE117A01022&Cname=ABB-India-Ltd&srno=17038039002&opt=9
/HomeFinancial.aspx?&cocode=INE117A01022&Cname=ABB-India-Ltd&srno=17019039003&opt=9
/HomeFinancial.aspx?&cocode=INE117A01022&Cname=ABB-India-Ltd&srno=17019038003&opt=9
/HomeFinancial.aspx?&cocode=INE117A01022&Cname=ABB-India-Ltd&srno=17019010085&opt=9
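Note that the printed hrefs are relative to the iframe's domain (hindubusiness.cmlinks.com), not to thehindubusinessline.com. If you need absolute URLs, one way is to resolve them against the frame's base URL with `urllib.parse.urljoin` (a minimal sketch; the base URL is taken from the iframe's `src` attribute shown above):

```python
from urllib.parse import urljoin

# Base URL of the iframe (from its src attribute), not the outer page
frame_base = "http://hindubusiness.cmlinks.com/Companydetails.aspx"

# Relative hrefs as printed by the scraping loop above
hrefs = [
    "/HomeFinancial.aspx?&cocode=INE117A01022&Cname=ABB-India-Ltd&srno=17040010444&opt=9",
]

# urljoin resolves each relative path against the frame's scheme and host
absolute = [urljoin(frame_base, h) for h in hrefs]
print(absolute[0])
```

This prints the full URL on the cmlinks.com host, which is what you would have to request to follow one of those links.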

Regarding "python - Web scraping with Python + Selenium", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/48717962/
