
python - Can't get all links from multiple pages (without the URL changing)


I want to get all the links from 10 pages, but I can't click the link to the second page. The URL is https://10times.com/search?cx=partner-pub-8525015516580200%3Avtujn0s4zis&cof=FORid%3A10&ie=ISO-8859-1&q=%22Private+Equity%22&searchtype=All

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import bs4

from selenium import webdriver
import time

url = "https://10times.com/search?cx=partner-pub-8525015516580200%3Avtujn0s4zis&cof=FORid%3A10&ie=ISO-8859-1&q=%22Private+Equity%22&searchtype=All"
driver = webdriver.Chrome("C:\\Users\Ritesh\PycharmProjects\BS\drivers\chromedriver.exe")
driver.get(url)

def getnames(driver):
    soup = bs4.BeautifulSoup(driver.page_source, 'lxml')
    sink = soup.find("div", {"class": "gsc-results gsc-webResult"})
    links = sink.find_all('a')
    for link in links:
        try:
            print(link['href'])
        except:
            print("")

while True:
    getnames(driver)
    time.sleep(5)
    nextpage = driver.find_element_by_link_text("2")
    nextpage.click()
    time.sleep(2)

Please help me solve this problem.

Best Answer

You will need to use selenium, since the page contains dynamic elements. The code below grabs all the links from each page:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time

url = "https://10times.com/search?cx=partner-pub-8525015516580200%3Avtujn0s4zis&cof=FORid%3A10&ie=ISO-8859-1&q=%22Private+Equity%22&searchtype=All"
driver = webdriver.Chrome("C:\\Users\Ritesh\PycharmProjects\BS\drivers\chromedriver.exe")
driver.get(url)

# wait for the pagination block of the search widget to render
WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.XPATH, """//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div""")))

pages_links = driver.find_elements_by_xpath("""//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div/div""")

all_urls = []

for page_index in range(len(pages_links)):

    WebDriverWait(driver, 20).until(
        EC.presence_of_element_located((By.XPATH, """//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div""")))

    # re-find the pager links on every iteration to avoid stale element references
    pages_links = driver.find_elements_by_xpath("""//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div/div""")

    page_link = pages_links[page_index]
    print("getting links for page:", page_link.text)

    page_link.click()

    time.sleep(1)

    # wait until all links are loaded
    WebDriverWait(driver, 20).until(
        EC.presence_of_element_located((By.XPATH, """//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]""")))

    first_link = driver.find_element_by_xpath("""//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[1]/div[1]/div[1]/div/a""")

    results_links = driver.find_elements_by_xpath("""//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div/div[1]/div[1]/div/a""")

    # each result anchor carries its target URL in the data-cturl attribute
    urls = [first_link.get_attribute("data-cturl")] + [l.get_attribute("data-cturl") for l in results_links]

    all_urls = all_urls + urls


driver.quit()
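For a quick check once the loop has finished, here is a minimal sketch that prints what was collected; it assumes only the all_urls list built above:

# deduplicate while preserving order, then print each collected URL
seen = set()
for u in all_urls:
    if u and u not in seen:
        seen.add(u)
        print(u)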

You can use this code as-is, or try combining it with the code you already have.
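For example, a minimal sketch of one way to combine the two, reusing the getnames function from the question and the pager XPaths from the answer above (unverified assumptions, not tested against the live page):

# assumes the driver, imports, and getnames() defined earlier in this post
wait = WebDriverWait(driver, 20)
pager = """//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div/div"""

for page_index in range(len(driver.find_elements_by_xpath(pager))):
    # re-find the pager links on every pass to avoid stale element references
    driver.find_elements_by_xpath(pager)[page_index].click()
    wait.until(EC.presence_of_element_located(
        (By.XPATH, """//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]""")))
    getnames(driver)  # the question's function: prints every result href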

Note that it does not pick up the ad links, since I assume you don't need those, right?

Let me know if this helps.

Regarding python - Can't get all links from multiple pages (without the URL changing), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52582671/
