
python - Scrapy - Yielding the URL when max redirections are reached [301]


Code

# -*- coding: utf-8 -*-
import scrapy
import pandas as pd
from ..items import Homedepotv2Item
from scrapy.http import HtmlResponse


class HomedepotspiderSpider(scrapy.Spider):
    name = 'homeDepotSpider'
    #allowed_domains = ['homedepot.com']

    start_urls = ['https://www.homedepot.com/p/305031636', 'https://www.homedepot.com/p/311456581']
    handle_httpstatus_list = [301]

    def parseHomeDepot(self, response):

        # get top-level item
        items = response.css('.pip-container-fluid')
        for product in items:
            item = Homedepotv2Item()

            productSKU = product.css('.modelNo::text').getall()  # get SKU

            productURL = response.request.url  # get URL

            item['productSKU'] = productSKU
            item['productURL'] = productURL

            yield item

Terminal messages

Without handle_httpstatus_list = [301]:
2020-03-12 12:24:58 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://www.homedepot.com/p/ZLINE-Kitchen-and-Bath-ZLINE-30-in-Wall-Mount-Range-Hood-in-DuraSnow-%C3%91-Stainless-Steel-8687S-30-8687S-30/305031636> from <GET https://www.homedepot.com/p/ZLINE-Kitchen-and-Bath-ZLINE-30-in-Wall-Mount-Range-Hood-in-DuraSnow-%C3%91-Stainless-Steel-8687S-30-8687S-30/305031636>
2020-03-12 12:24:58 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.homedepot.com/p/ZLINE-Kitchen-and-Bath-ZLINE-30-in-Wooden-Wall-Mount-Range-Hood-in-Walnut-Includes-Remote-Motor-KBRR-RS-30/311456581>
{'productName': ['ZLINE 30 in. Wooden Wall Mount Range Hood in Walnut - '
'Includes Remote Motor'],
'productOMS': '311456581',
'productPrice': [' 979.95'],
'productSKU': ['KBRR-RS-30'],
'productURL': 'https://www.homedepot.com/p/ZLINE-Kitchen-and-Bath-ZLINE-30-in-Wooden-Wall-Mount-Range-Hood-in-Walnut-Includes-Remote-Motor-KBRR-RS-30/311456581'}

2020-03-12 12:25:01 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://www.homedepot.com/p/ZLINE-Kitchen-and-Bath-ZLINE-30-in-Wall-Mount-Range-Hood-in-DuraSnow-%C3%91-Stainless-Steel-8687S-30-8687S-30/305031636> from <GET https://www.homedepot.com/p/ZLINE-Kitchen-and-Bath-ZLINE-30-in-Wall-Mount-Range-Hood-in-DuraSnow-%C3%91-Stainless-Steel-8687S-30-8687S-30/305031636>
2020-03-12 12:25:01 [scrapy.downloadermiddlewares.redirect] DEBUG: Discarding <GET https://www.homedepot.com/p/ZLINE-Kitchen-and-Bath-ZLINE-30-in-Wall-Mount-Range-Hood-in-DuraSnow-%C3%91-Stainless-Steel-8687S-30-8687S-30/305031636>: max redirections reached
2020-03-12 12:25:01 [scrapy.core.engine] INFO: Closing spider (finished)
2020-03-12 12:25:01 [scrapy.extensions.feedexport] INFO: Stored csv feed (1 items) in: stdout:
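
Note that in the "Redirecting (301)" lines above, the "to" and "from" URLs are identical: the page redirects to itself, which is why the RedirectMiddleware eventually gives up with "max redirections reached" and discards the request.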

With handle_httpstatus_list = [301]:
2020-03-12 12:27:30 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-03-12 12:27:31 [scrapy.core.engine] DEBUG: Crawled (301) <GET https://www.homedepot.com/p/305031636> (referer: None)
2020-03-12 12:27:31 [scrapy.core.engine] DEBUG: Crawled (301) <GET https://www.homedepot.com/p/311456581> (referer: None)
2020-03-12 12:27:31 [scrapy.core.engine] INFO: Closing spider (finished)

This is the command I use to export to Excel: scrapy crawl homeDepotSpider -t csv -o - > "pathname"
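
(Here -t csv selects the CSV feed exporter and -o - writes the feed to stdout, which the shell redirect then captures in the file; this matches the "Stored csv feed (1 items) in: stdout:" line in the log above.)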

Problem

The problem I originally ran into was that my spider ignored 'https://www.homedepot.com/p/305031636', because that link fails with error code 301 (too many redirects). After researching the issue I found that handle_httpstatus_list = [301] was supposed to fix it. However, with that setting the working link ('https://www.homedepot.com/p/311456581'), which redirects to a different page, is also ignored. (This is expected: listing 301 in handle_httpstatus_list tells Scrapy's RedirectMiddleware not to follow those redirects at all, so every 301 response is handed straight to the spider, as the "Crawled (301)" lines above show.)

Essentially, what I want to do is scrape data from all of the links that do not fail with ERR_TOO_MANY_REDIRECTS, but grab just the URL from the links that do fail with that error, and then export all of that data to Excel.

Edit: a better way to phrase the question: since all of the URLs I am working with go through a redirect, how do I handle the pages that cannot be redirected and grab their URLs?

Also, this is not my entire program; I have only included the parts I thought were necessary.

Best Answer

You can handle the 301 status code manually with something like the following:

import scrapy


class HomedepotspiderSpider(scrapy.Spider):
    name = 'my_spider'
    retry_max_count = 3

    start_urls = ['https://www.homedepot.com/p/305031636', 'https://www.homedepot.com/p/311456581']
    handle_httpstatus_list = [301]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.retries = {}  # here the per-URL retry counts are stored

    def parse(self, response):
        if response.status == 301:
            retries = self.retries.setdefault(response.url, 0)
            if retries < self.retry_max_count:
                self.retries[response.url] += 1
                # re-issue the same request; dont_filter=True bypasses the
                # duplicate filter, which would otherwise drop the repeated URL
                yield response.request.replace(dont_filter=True)
            else:
                # ...
                # DO SOMETHING TO TRACK ERR_TOO_MANY_REDIRECTS AND SAVE response.url
                # ...
                pass

        return
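
Here response.request.replace(dont_filter=True) re-queues the exact same request; dont_filter=True is needed because the scheduler's duplicate filter would otherwise silently drop a URL it has already seen. Keep in mind that a 301 is a permanent redirect, so re-requesting the same URL will normally just return the same 301 again; the retry counter mainly bounds the loop before the URL is recorded.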

Note that status code 302 can also be used for redirects.
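
Building on that, here is a minimal, self-contained sketch (an illustrative variant, not the code from the answer above) that also handles 302, follows the Location header manually so that normally-redirecting pages still get scraped, and yields just the URL for pages that redirect to themselves, so it still lands in the CSV feed. The plain-dict items, the 'error' field, and the max_redirects cap are assumptions; swap in Homedepotv2Item and the real selectors as needed:

import scrapy


class RedirectAwareSpider(scrapy.Spider):
    name = 'redirectAwareSpider'
    start_urls = ['https://www.homedepot.com/p/305031636', 'https://www.homedepot.com/p/311456581']
    # let the spider see 301/302 responses instead of the RedirectMiddleware
    handle_httpstatus_list = [301, 302]
    max_redirects = 5  # illustrative cap on manually followed redirects

    def parse(self, response):
        if response.status in (301, 302):
            target = response.urljoin(response.headers.get('Location', b'').decode())
            depth = response.meta.get('redirect_depth', 0)
            if target == response.url or depth >= self.max_redirects:
                # the page redirects to itself (the ERR_TOO_MANY_REDIRECTS case)
                # or the chain is too long: record only the URL
                yield {'productURL': response.url, 'error': 'max_redirects'}
            else:
                # a normal redirect: follow it manually
                yield response.follow(
                    target,
                    callback=self.parse,
                    dont_filter=True,
                    meta={'redirect_depth': depth + 1},
                )
            return
        # a normal 200 response: scrape the product page as usual
        for product in response.css('.pip-container-fluid'):
            yield {
                'productSKU': product.css('.modelNo::text').getall(),
                'productURL': response.url,
            }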

Regarding python - Scrapy - yielding the URL when max redirections are reached [301], we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60624455/
