
python - How to get all products from all pages of a subcategory (python, Amazon)

Reposted. Author: 行者123. Updated: 2023-11-28 22:21:11

How can I get all products from every page of a subcategory? I have attached my program. Right now it only scrapes the first page. I want to get all products of that subcategory from all 400+ pages, i.e. go to the next page, extract all products, then go to the next page, and so on. I would appreciate any help.

# selenium imports
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import random

PROXY = "88.157.149.250:8080"


chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server=%s' % PROXY)
# //a[starts-with(@href, 'https://www.amazon.com/')]/@href
LINKS_XPATH = '//*[contains(@id,"result")]/div/div[3]/div[1]/a'
browser = webdriver.Chrome(chrome_options=chrome_options)
browser.get(
    'https://www.amazon.com/s/ref=lp_11444071011_nr_p_8_1/132-3636705-4291947?rh=n%3A3375251%2Cn%3A%213375301%2Cn%3A10971181011%2Cn%3A11444071011%2Cp_8%3A2229059011')
links = browser.find_elements_by_xpath(LINKS_XPATH)
for link in links:
    href = link.get_attribute('href')
    print(href)

Best Answer

Since you want to fetch a large amount of data, it is better to get it with direct HTTP requests rather than navigating to each page with Selenium...

Try iterating over all the pages and scraping the required data, like this:

import requests
from lxml import html

page_counter = 1
links = []

while True:
    headers = {"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0"}
    url = "https://www.amazon.com/s/ref=sr_pg_{0}?rh=n%3A3375251%2Cn%3A!3375301%2Cn%3A10971181011%2Cn%3A11444071011%2Cp_8%3A2229059011&page={0}&ie=UTF8&qid=1517398836".format(page_counter)
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        source = html.fromstring(response.content)
        links.extend(source.xpath('//*[contains(@id,"result")]/div/div[3]/div[1]/a/@href'))
        page_counter += 1
    else:
        break

print(links)
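One caveat worth checking: Amazon may answer page numbers past the last page with HTTP 200 and an empty result list, in which case the status-code check above would never break the loop. A safer stop condition is to break as soon as a page yields no links. A minimal sketch of that extraction step (the XPath is the one from the answer above and assumes Amazon's page layout at the time):

```python
from lxml import html

# XPath from the answer above -- assumes Amazon's result-page layout.
LINKS_XPATH = '//*[contains(@id,"result")]/div/div[3]/div[1]/a/@href'

def extract_links(page_source):
    """Return the product links found on one result page.

    An empty list signals that there are no more results,
    which is a more reliable stop condition than the status code.
    """
    tree = html.fromstring(page_source)
    return tree.xpath(LINKS_XPATH)
```

In the `while` loop, call `extract_links(response.content)` and `break` when it returns an empty list instead of relying only on `response.status_code`.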

P.S. Check this ticket to use a proxy with the requests library.
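For reference, `requests` accepts proxies as a dict keyed by URL scheme. A minimal sketch, reusing the proxy address from the question (whether that proxy still works is an assumption; substitute your own):

```python
import requests

# Proxy address taken from the question -- replace with a working proxy.
PROXY = "88.157.149.250:8080"

# requests expects a dict mapping URL scheme to proxy URL.
proxies = {
    "http": "http://" + PROXY,
    "https": "http://" + PROXY,
}

def fetch(url):
    """Fetch a page through the proxy; the timeout avoids hanging forever."""
    headers = {"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0"}
    return requests.get(url, headers=headers, proxies=proxies, timeout=30)
```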

Regarding "python - How to get all products from all pages of a subcategory (python, Amazon)", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48541117/
