
python - Scraping values from a website with Selenium


I am trying to extract data from the following website:

https://www.tipranks.com/stocks/sui/stock-analysis

My target is the value "6" inside the octagon:


I believe I am targeting the correct XPath.

Here is my code:

import sys
import os
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium import webdriver

os.environ['MOZ_HEADLESS'] = '1'
binary = FirefoxBinary('C:/Program Files/Mozilla Firefox/firefox.exe', log_file=sys.stdout)

browser = webdriver.PhantomJS(service_args=["--load-images=no", '--disk-cache=true'])

url = 'https://www.tipranks.com/stocks/sui/stock-analysis'
xpath = '/html/body/div[1]/div/div/div/div/main/div/div/article/div[2]/div/main/div[1]/div[2]/section[1]/div[1]/div[1]/div/svg/text/tspan'
browser.get(url)

element = browser.find_element_by_xpath(xpath)

print(element)

Here is the error I get:

Traceback (most recent call last):
File "C:/Users/jaspa/PycharmProjects/ig-markets-api-python-library/trader/market_signal_IV_test.py", line 15, in <module>
element = browser.find_element_by_xpath(xpath)
File "C:\Users\jaspa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 394, in find_element_by_xpath
return self.find_element(by=By.XPATH, value=xpath)
File "C:\Users\jaspa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 978, in find_element
'value': value})['value']
File "C:\Users\jaspa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\jaspa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: {"errorMessage":"Unable to find element with xpath '/html/body/div[1]/div/div/div/div/main/div/div/article/div[2]/div/main/div[1]/div[2]/section[1]/div[1]/div[1]/div/svg/text/tspan'","request":{"headers":{"Accept":"application/json","Accept-Encoding":"identity","Content-Length":"96","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:51786","User-Agent":"selenium/3.141.0 (python windows)"},"httpVersion":"1.1","method":"POST","post":"{\"using\": \"xpath\", \"value\": \"/h3/div/span\", \"sessionId\": \"d8e91c70-9139-11e9-a9c9-21561f67b079\"}","url":"/element","urlParsed":{"anchor":"","query":"","file":"element","directory":"/","path":"/element","relative":"/element","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/element","queryKey":{},"chunks":["element"]},"urlOriginal":"/session/d8e91c70-9139-11e9-a9c9-21561f67b079/element"}}
Screenshot: available via screen

I can see that the problem is caused by an incorrect XPath, but I can't work out why.

I should also point out that I believe Selenium is the best approach for scraping this site, and I intend to extract other values and repeat these queries across multiple pages for different stocks. If anyone thinks I would be better off with BeautifulSoup, lxml, etc., I am happy to hear suggestions!

Thanks in advance!

Best Answer

You don't even need to spell out the whole path. The octagon sits inside a div with the class client-components-ValueChange-shape__Octagon, so just search for that div.

# Select every div with the octagon shape class
octagons = browser.find_elements_by_css_selector("div[class='client-components-ValueChange-shape__Octagon']")
for octagon in octagons:
    print(octagon.text)

Output:

6
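For reference, a minimal self-contained sketch of the same idea (an assumption on my part: it uses headless Firefox with geckodriver instead of PhantomJS, and adds an explicit wait because the value is rendered by the page's JavaScript; the exact class name may have changed since this answer was written):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.FirefoxOptions()
options.headless = True  # run Firefox without opening a window

browser = webdriver.Firefox(options=options)
try:
    browser.get('https://www.tipranks.com/stocks/sui/stock-analysis')

    # Wait until the octagon div has been rendered by the page's JavaScript,
    # matching the class by substring instead of an absolute XPath.
    octagon = WebDriverWait(browser, 15).until(
        EC.visibility_of_element_located(
            (By.CSS_SELECTOR, "div[class*='ValueChange-shape__Octagon']")
        )
    )
    print(octagon.text)  # e.g. 6
finally:
    browser.quit()

The explicit wait matters here: the original NoSuchElementException is typical of querying a JavaScript-rendered page before the element exists, regardless of whether the selector itself is right.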

This question about scraping values from a website with Selenium was originally asked on Stack Overflow: https://stackoverflow.com/questions/56638228/
