
python - Trying to extract a dynamic table with selenium/beautiful soup (the URL doesn't change)


I've been trying to extract the table below. I get to it by automating the form input with chromedriver and then solving the CAPTCHA with an anti-captcha service. I saw an example where someone used Beautiful Soup after the table was generated.

It's a multi-page table, but I just want to get the first page before figuring out how to click through to the other pages. I'm not sure whether I can use Beautiful Soup here, because when I run the code below I get "No properties to display" for the first row. That row appears whether or not there are search results.

I can't embed images here because my reputation isn't high enough (sorry, I'm new and I know that's annoying; I spent a few hours trying to solve this before posting), but if you go to the site and search for "Al" or any other input, you can see the table HTML: https://claimittexas.org/app/claim-search

Here is my code:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
from python_anticaptcha import AnticaptchaClient, NoCaptchaTaskProxylessTask
import re
import pandas as pd
import os
import time
import requests

parsed_table_date = []
url = "https://claimittexas.org/app/claim-search"
driver = webdriver.Chrome()
driver.implicitly_wait(15)
driver.get(url)
lastNameField = driver.find_element_by_xpath('//input[@id="lastName"]')
lastNameField.send_keys('Al')
api_key = #MY API key
site_key = '6LeQLyEUAAAAAKTwLC-xVC0wGDFIqPg1q3Ofam5M' # grab from site
client = AnticaptchaClient(api_key)
task = NoCaptchaTaskProxylessTask(url, site_key)
job = client.createTask(task)
print("Waiting to solution by Anticaptcha workers")
job.join()
# Receive response
response = job.get_solution_response()
print("Received solution", response)
# Inject response in webpage
driver.execute_script('document.getElementById("g-recaptcha-response").innerHTML = "%s"' % response)
# Wait a moment to execute the script (just in case).
time.sleep(1)
# Press submit button
driver.find_element_by_xpath('//button[@type="submit" and @class="btn-std"]').click()
time.sleep(1)
html = driver.page_source
soup = BeautifulSoup(html, "lxml")
table = soup.find("table", { "class" : "claim-property-list" })
table_body = table.find('tbody')
#rows = table_body.find_all('tr')
for row in table_body.findAll('tr'):
    print(row)
    for col in row.findAll('td'):
        print(col.text.strip())

Best Answer

You are getting No properties to display. because the first <tr> in the table is a placeholder row that contains only that text.

Instead, you have to start iterating from the second index of the elements:

//tbody/tr[2]/td[2]
//tbody/tr[2]/td[3]
//tbody/tr[2]/td[4]
...
//tbody/tr[3]/td[2]
//tbody/tr[3]/td[3]
//tbody/tr[3]/td[4]
...

So you have to specify the starting index for the iteration, like this:

rows = driver.find_elements_by_xpath("//tbody/tr")
for row in rows[1:]:
    print(row.text)  # prints the whole row
    for col in row.find_elements_by_xpath('td')[1:]:
        print(col.text.strip())

The code above produces output like this:

CLAIM # this is button value
37769557 1ST TEXAS LANDSCAPIN 6522 JASMINE ARBOR LANE HOUSTON TX 77088 MOTEL 6 OPERATING LP ACCOUNTS PAYABLE $351.00 2010
37769557
1ST TEXAS LANDSCAPIN
6522 JASMINE ARBOR LANE
HOUSTON
TX
77088
MOTEL 6 OPERATING LP
ACCOUNTS PAYABLE
$351.00
2010
CLAIM # this is button value
38255919 24X7 APARTMENT FIND OF TEXAS 1818 MOSTON DR SPRING TX 77386 NOT DISCLOSED NOT DISCLOSED $88.76 2017
38255919
24X7 APARTMENT FIND OF TEXAS
1818 MOSTON DR
SPRING
...
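If you would rather keep the Beautiful Soup approach from the question, the same fix applies there: skip the placeholder first row, and skip the first cell of each data row (it only holds the CLAIM button). Below is a minimal sketch along those lines, assuming html has already been captured from driver.page_source after submitting the form, as in the question's code; parsed_table_data is just an illustrative name.

from bs4 import BeautifulSoup

soup = BeautifulSoup(html, "lxml")
table = soup.find("table", {"class": "claim-property-list"})
rows = table.find("tbody").find_all("tr")

parsed_table_data = []
for row in rows[1:]:  # rows[0] is the "No properties to display" placeholder row
    # first <td> holds the CLAIM button, so start from the second cell
    cells = [td.text.strip() for td in row.find_all("td")[1:]]
    parsed_table_data.append(cells)

for record in parsed_table_data:
    print(record)

This mirrors the Selenium loop above but does the slicing on the parsed HTML instead of on WebElements, so it only needs the page source grabbed once after the results table has rendered.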

Regarding python - Trying to extract a dynamic table with selenium/beautiful soup (the URL doesn't change), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51072792/
