
python - Simple webpage changes or button removals render scraped data useless


I've come across a lot of pages where a simple button removal, or even just a small glitch in the page, breaks my scrape.

This problem seems to come up often, but I'm not sure how to tackle it. Essentially, when the teams, odds and everything else have disappeared, the XPath //*[contains(@class, "sport-block") and .//div/div]//*[contains(@class, "purple-ar")] still picks up the links, as it should, but without the teams and odds the scrape is useless.

I originally used CSS selectors, but I can't see how this would be possible within the limits of CSS.

The simple XPath I'm after:

//*[contains(@class, "sport-block") and .//div/div]//*[contains(@class, "purple-ar")]

The problem persists.
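One way to skip the broken entries entirely is to add predicates so the XPath only matches blocks that still contain a team name and an odds figure. A minimal sketch with lxml; the sample markup here is invented for illustration, and the class names (match-name, bet-amount) are taken from the markup and code later in this post:

from lxml import html

# One complete block and one whose team/odds have been removed.
doc = html.fromstring("""
<div>
  <div class="sport-block">
    <span class="match-name"><a href="/match-1">Celtic</a></span>
    <span class="bet-amount">1.90</span>
    <a class="purple-arrow" href="/match-1">5 Markets</a>
  </div>
  <div class="sport-block">
    <a class="purple-arrow" href="/match-2">3 Markets</a>
  </div>
</div>
""")

# Only take arrow links from blocks that still contain a team name and
# an odds amount, so incomplete blocks are skipped instead of scraped.
links = doc.xpath(
    '//*[contains(@class, "sport-block")]'
    '[.//span[@class="match-name"] and .//span[@class="bet-amount"]]'
    '//a[contains(@class, "purple-arrow")]/@href'
)
print(links)  # ['/match-1']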

I'm not very familiar with ancestor:: and preceding-sibling::... but an XPath along the lines of:

i.e.: //a/ancestor::div[contains(@class, 'xpath')]/preceding-sibling::div[contains(@class, 'xpath')]//a

to:

//a/ancestor::div[contains(@class, 'table-grid')]/preceding-sibling::span[contains(@class, 'sprite-icon arrow-icon arrow-right arrow-purple')]//a

might solve it (assuming I can get it to work).
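For reference, this is how those two axes behave; a toy example, not the real page markup:

from lxml import html

doc = html.fromstring("""
<div class="table-grid">
  <div class="header"><span>Header</span></div>
  <div class="row"><a href="/x">link</a></div>
</div>
""")

a = doc.xpath('//a')[0]
# ancestor:: walks up from the <a> through its enclosing elements ...
grid = a.xpath('ancestor::div[contains(@class, "table-grid")]')[0]
# ... while preceding-sibling:: walks backwards among elements that share
# the same parent, so it is applied to the row div, not to the <a> itself.
header = a.xpath('ancestor::div[@class="row"]'
                 '/preceding-sibling::div[@class="header"]/span/text()')
print(grid.get('class'), header)  # table-grid ['Header']

Note that in the markup below, the sprite-icon span is a child of the purple-arrow <a>, not a preceding sibling of the table grid, so the second XPath above would not match this structure.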

<td class="top-subheader uppercase">
    <span>
        English Premier League Futures
    </span>
</td>
</tr>
<tr>
    <td class="content">
        <div class="titles">
            <span class="match-name">
                <a href="/sports-betting/soccer/united-kingdom/english-premier-league-futures/outright-markets-20171226-616961-22079860">
                    Outright Markets
                </a>
            </span>
            <span class="tv">
                26/12
            </span>
            <span class="other-matches">
                <a href="/sports-betting/soccer/united-kingdom/english-premier-league-futures/outright-markets-20171226-616961-22079860" class="purple-arrow">5 Markets
                    <span class="sprite-icon arrow-icon arrow-right arrow-purple"></span>
                </a>
            </span>
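Given that markup, one way to keep each team paired with its own link is to iterate over the shared container (div.titles here) and read both anchors relative to it. A sketch with lxml; page.html is a hypothetical saved copy of the page:

from lxml import html

with open('page.html', encoding='utf-8') as f:  # hypothetical saved copy
    doc = html.fromstring(f.read())

for titles in doc.xpath('//div[@class="titles"]'):
    # Read both links relative to the same container, so a missing piece
    # yields an empty list instead of silently borrowing another row's data.
    name = titles.xpath('.//span[@class="match-name"]/a/text()')
    link = titles.xpath('.//a[@class="purple-arrow"]/@href')
    if name and link:
        print(name[0].strip(), link[0])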

Is there any way around this? Thanks.

Current output:

Steaua Bucharest    Link for below
Celtic Link for below
Napoli Link for below
Lyon Link for below

Desired output:

Steaua Bucharest    LINK FOR Steaua Bucharest
Celtic Link for Celtic
Napoli Link for Napoli
Lyon Link for Lyon


Is there any way to solve this, or even to narrow down an approach? It's a persistent problem. Thanks.

Best Answer

To make sure your data structure is complete for each group, I iterate over the groups and use nested (or relative? I'm not sure of the terminology here) XPaths to get the data. An XPath is made relative by putting a . in front of each query.
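The distinction matters in Selenium: an XPath that starts with // searches the whole document even when called on an element, which is exactly how one group's data can bleed into another's. A minimal illustration, reusing the driver and groups variables from the code below:

group = driver.find_elements_by_xpath(groups)[0]

# Absolute: the leading // ignores `group` and searches the whole page,
# so this always returns the first match-name link in the document.
first_on_page = group.find_element_by_xpath('//span[@class="match-name"]/a')

# Relative: the leading . scopes the search to this group only.
first_in_group = group.find_element_by_xpath('.//span[@class="match-name"]/a')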

I also cleaned things up a bit:

  • You were grabbing a bunch of links and using them to work through the pages until done. I replaced that with a while loop.
  • I added generous try/except blocks to capture as much data as possible.
  • I added a sleep on every new page to allow the data to load (the time can be adjusted manually for your connection).

Let me know if this solves your data consistency problem.

import csv
import time
from selenium import webdriver
from selenium.common.exceptions import TimeoutException, NoSuchElementException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait as wait

driver = webdriver.Chrome()
driver.set_window_size(1024, 600)
driver.maximize_window()

driver.get('https://crownbet.com.au/sports-betting/soccer')

# hide the sticky headers so they don't cover elements when scrolling/clicking
header = driver.find_element_by_tag_name('header')
driver.execute_script('arguments[0].hidden="true";', header)
header1 = driver.find_element_by_css_selector('div.row.no-margin.nav.sticky-top-nav')
driver.execute_script('arguments[0].hidden="true";', header1)

# XPaths for the data (xp_ba1 points at data-id="1" to match xp_bp1)
groups = '//div[@id="sports-matches"]/div[@class="container-fluid"]'
xp_match_link = './/span[@class="match-name"]/a'
xp_bp1 = './/div[@data-id="1"]//span[@class="bet-party"]'
xp_ba1 = './/div[@data-id="1"]//span[@class="bet-amount"]'
xp_bp3 = './/div[@data-id="3"]//span[@class="bet-party"]'
xp_ba3 = './/div[@data-id="3"]//span[@class="bet-amount"]'

while True:
    try:
        # wait for the data to populate the tables
        wait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, xp_bp1)))
        time.sleep(2)

        data = []
        for elem in driver.find_elements_by_xpath(groups):
            try:
                match_link = elem.find_element_by_xpath(xp_match_link)\
                                 .get_attribute('href')
            except:
                match_link = None

            try:
                bp1 = elem.find_element_by_xpath(xp_bp1).text
            except:
                bp1 = None

            try:
                ba1 = elem.find_element_by_xpath(xp_ba1).text
            except:
                ba1 = None

            try:
                bp3 = elem.find_element_by_xpath(xp_bp3).text
            except:
                bp3 = None

            try:
                ba3 = elem.find_element_by_xpath(xp_ba3).text
            except:
                ba3 = None

            data.append([match_link, bp1, ba1, bp3, ba3])
        print(data)

        element = driver.find_element_by_xpath('//span[text()="Next Page"]')
        driver.execute_script("arguments[0].scrollIntoView();", element)
        wait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, '//span[text()="Next Page"]')))
        element.click()

        with open('test.csv', 'a', newline='', encoding="utf-8") as outfile:
            writer = csv.writer(outfile)
            for row in data:
                writer.writerow(row)

    except TimeoutException as ex:
        pass
    except NoSuchElementException as ex:
        print(ex)
        break
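As a side note, the repeated try/except blocks could be collapsed into a small helper; a sketch, assuming the same Selenium 3 API used above (get_field is a made-up name):

def get_field(elem, xpath, attr=None):
    """Return the text (or an attribute) of the first match, or None."""
    try:
        found = elem.find_element_by_xpath(xpath)
        return found.get_attribute(attr) if attr else found.text
    except NoSuchElementException:
        return None

# Usage inside the loop above:
#   match_link = get_field(elem, xp_match_link, attr='href')
#   bp1 = get_field(elem, xp_bp1)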

The original question and answer can be found on Stack Overflow: https://stackoverflow.com/questions/47922167/
