
python - Scraping an infinite scrolling website with Selenium in Python


I want to scrape the content of this website, which loads more posts through infinite scrolling: http://stocktwits.com/symbol/AAPL?q=AAPL

I found an answer to a similar question on Stack Overflow: scrape websites with infinite scrolling

Here is the code copied from there:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
import sys

import unittest, time, re

class Sel(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.implicitly_wait(30)
        self.base_url = "https://twitter.com"
        self.verificationErrors = []
        self.accept_next_alert = True

    def test_sel(self):
        driver = self.driver
        delay = 3
        driver.get(self.base_url + "/search?q=stckoverflow&src=typd")
        driver.find_element_by_link_text("All").click()
        for i in range(1, 100):
            self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            time.sleep(4)
        html_source = driver.page_source
        data = html_source.encode('utf-8')


if __name__ == "__main__":
    unittest.main()
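
The part of that snippet doing the actual work is the loop that repeatedly executes window.scrollTo(0, document.body.scrollHeight) and then sleeps so the page can load the next batch of results. For reference, here is a minimal sketch of the same idea outside the unittest scaffolding, scrolling until document.body.scrollHeight stops growing rather than looping a fixed 100 times; the Firefox driver, the URL, and the 4-second pause are carried over from the snippet above as placeholders, not requirements.

from selenium import webdriver
import time

# Minimal sketch: scroll until the page height stops growing instead of
# looping a fixed number of times. URL and timing are placeholders.
driver = webdriver.Firefox()
driver.get("https://twitter.com/search?q=stckoverflow&src=typd")

last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(4)  # wait for the next batch of results to load
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break  # the page stopped growing, so no more content is loading
    last_height = new_height

html_source = driver.page_source
data = html_source.encode('utf-8')
driver.quit()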

Now I want to scrape the Stocktwits site (linked above) instead of Twitter.

I modified the code above as follows:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
import sys

import unittest, time, re

class Sel(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.implicitly_wait(30)
        self.base_url = "http://stocktwits.com/symbol/AAPL?q=AAPL"
        self.verificationErrors = []
        self.accept_next_alert = True

    def test_sel(self):
        driver = self.driver
        delay = 3
        driver.get(self.base_url)
        driver.find_element_by_link_text("All").click()
        for i in range(1, 100):
            self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            time.sleep(4)
        html_source = driver.page_source
        data = html_source.encode('utf-8')


if __name__ == "__main__":
    unittest.main()

But when I run the code I get this error:

NoSuchElementException: Message: Unable to locate element: {"method":"link text","selector":"All"}

I would appreciate any help in figuring out what is wrong.

Best Answer

It looks like the problem is with this line:

driver.find_element_by_link_text("All").click()

You are expecting an element with the link text "All", but no such element exists on the Stocktwits page.
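
In other words, the driver.find_element_by_link_text("All").click() line came from the Twitter example; the Stocktwits page has no "All" link, so Selenium raises NoSuchElementException. A minimal sketch of one way to adapt the script, assuming all you need is the fully scrolled page source, is to drop or guard that Twitter-specific click and keep the scroll loop:

from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
import time

# Sketch of the adapted script outside the unittest scaffolding, assuming
# the fully scrolled page source is all that is needed from Stocktwits.
driver = webdriver.Firefox()
driver.get("http://stocktwits.com/symbol/AAPL?q=AAPL")

# The "All" tab only exists on Twitter's search page; guard the click so a
# missing element does not abort the run.
try:
    driver.find_element_by_link_text("All").click()
except NoSuchElementException:
    pass  # no "All" link on Stocktwits, just continue scrolling

for i in range(1, 100):
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(4)  # give the page time to load the next batch of messages

html_source = driver.page_source
data = html_source.encode('utf-8')
driver.quit()

Note that if you keep self.driver.implicitly_wait(30) from the original class, the guarded lookup will block for the full 30 seconds before NoSuchElementException is raised, so simply deleting the click is usually the cleaner option.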

For the original question about scraping an infinite scrolling website with Selenium in Python, see Stack Overflow: https://stackoverflow.com/questions/28871115/
