
Scrapy: USER_AGENT and ROBOTSTXT_OBEY are set correctly, but I still get a 403 error


Hello, and thanks in advance for any help or guidance. Here is my spider:

import scrapy


class RakutenSpider(scrapy.Spider):
    name = "rak"
    allowed_domains = ["rakuten.com"]
    start_urls = ['https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore']

    def parse(self, response):
        for sel in response.xpath('//div[@class="page-bottom"]/div'):
            yield {
                'titles': sel.xpath("//div[@class='slider-prod-title']").extract_first(),
                'prices': sel.xpath("//span[@class='price-bold']").extract_first(),
                'images': sel.xpath("//div[@class='deal-img']/img").extract_first()
            }

Here is part of my settings.py:

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'
CONCURRENT_REQUESTS = 1
DOWNLOAD_DELAY = 5
# Obey robots.txt rules
ROBOTSTXT_OBEY = 'False'

Here is part of the log:

DEBUG: Crawled (403) <GET https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore> (referer: None)

I have already tried almost every solution I could find on Stack Overflow.


Log file: here is the new log, after installing the Firefox driver. Now I get a different error: Error downloading https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore

2017-11-17 00:38:45 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot)
2017-11-17 00:38:45 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'deals.spiders', 'CONCURRENT_REQUESTS': 1, 'SPIDER_MODULES': ['deals.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36', 'TELNETCONSOLE_ENABLED': False, 'DOWNLOAD_DELAY': 5}
2017-11-17 00:38:45 [py.warnings] WARNING: :0: UserWarning: You do not have a working installation of the service_identity module: 'No module named cryptography.x509'. Please install it from <https://pypi.python.org/pypi/service_identity> and make sure all of its dependencies are satisfied. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.

2017-11-17 00:38:45 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.corestats.CoreStats']
2017-11-17 00:38:45 [scrapy.middleware] INFO: Enabled downloader middlewares:
['deals.middlewares.JSMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-11-17 00:38:45 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-11-17 00:38:45 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-11-17 00:38:45 [scrapy.core.engine] INFO: Spider opened
2017-11-17 00:38:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-11-17 00:38:45 [scrapy.core.scraper] ERROR: Error downloading <GET https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/middleware.py", line 37, in process_request
    response = yield method(request=request, spider=spider)
  File "/home/seealldeals/tmp/scrapy/deals/deals/middlewares.py", line 63, in process_request
    driver = webdriver.Firefox()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/webdriver.py", line 144, in __init__
    self.service.start()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/common/service.py", line 74, in start
    stdout=self.log_file, stderr=self.log_file)
  File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 8] Exec format error
2017-11-17 00:38:45 [scrapy.core.engine] INFO: Closing spider (finished)
2017-11-17 00:38:45 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
'downloader/exception_type_count/exceptions.OSError': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 11, 17, 5, 38, 45, 328366),
'log_count/ERROR': 1,
'log_count/INFO': 7,
'log_count/WARNING': 1,
'memusage/max': 33509376,
'memusage/startup': 33509376,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2017, 11, 17, 5, 38, 45, 112667)}
2017-11-17 00:38:45 [scrapy.core.engine] INFO: Spider closed (finished)

Best Answer

What's wrong

  • rakuten.com is integrated with Google Analytics, which includes anti-spider detection.
  • If your requests do not handle rakuten.com's analytics.js correctly, you will be blocked from the site and get a 403 error code (see the quick check below).
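
A quick way to confirm this (a hedged check, not part of the original answer): send one request with the same browser User-Agent but without executing any JavaScript; if the response is still 403, the User-Agent alone is not the problem. A minimal sketch, assuming the requests package is installed:

    import requests

    # Same browser User-Agent as in the question's settings.py
    headers = {
        'User-Agent': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                       'AppleWebKit/537.36 (KHTML, like Gecko) '
                       'Chrome/61.0.3163.100 Safari/537.36'),
    }
    resp = requests.get(
        'https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore',
        headers=headers,
    )
    # No JavaScript runs here, so the anti-bot check cannot pass;
    # a 403 would match the behaviour seen in the Scrapy log above.
    print(resp.status_code)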

How to fix it

Use a JavaScript rendering technique.

  • Solution 1: integrate Scrapy with scrapy-splash

    • Here is the scrapy-splash GitHub repository
    • Install scrapy-splash from PyPI:

      pip install scrapy-splash
    • Install Docker on your machine
    • Run a Splash container:

      docker run -p 8050:8050 scrapinghub/splash
    • Add the following line to your settings.py (192.168.59.103 is the old boot2docker VM address; if Splash runs in Docker on the same machine, http://localhost:8050 is the typical value):

      SPLASH_URL = 'http://192.168.59.103:8050'
    • Append the Splash downloader middlewares to your settings.py:

      DOWNLOADER_MIDDLEWARES = {
          'scrapy_splash.SplashCookiesMiddleware': 723,
          'scrapy_splash.SplashMiddleware': 725,
          'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
      }
    • Change your spider code to:

      import scrapy
      from scrapy_splash import SplashRequest


      class RakutenSpider(scrapy.Spider):
          name = "rak"
          allowed_domains = ["rakuten.com"]
          start_urls = ['https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore']

          def start_requests(self):
              for url in self.start_urls:
                  yield SplashRequest(url, self.parse, args={'wait': 0.5})

          def parse(self, response):
              for sel in response.xpath('//div[@class="page-bottom"]/div'):
                  yield {
                      'titles': sel.xpath("//div[@class='slider-prod-title']").extract_first(),
                      'prices': sel.xpath("//span[@class='price-bold']").extract_first(),
                      'images': sel.xpath("//div[@class='deal-img']/img").extract_first()
                  }
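    • Beyond the downloader middlewares above, the scrapy-splash README also recommends a spider middleware and a Splash-aware dupefilter; a sketch taken from that documentation (adjust to your project as needed):

      SPIDER_MIDDLEWARES = {
          'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
      }
      DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
      HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'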
  • Solution 2: integrate Scrapy with Selenium WebDriver as a middleware

    • Selenium WebDriver Python bindings documentation
    • Install Selenium from PyPI:

      pip install selenium
    • If you want to use the Firefox browser, install Firefox's geckodriver on your PATH
    • If you want to use the Chrome browser, install ChromeDriver on your PATH
    • If you want to use the PhantomJS browser, install PhantomJS from Homebrew:

      brew install phantomjs
    • Add the JSMiddleware class to your middlewares.py:

      from scrapy.http import HtmlResponse
      from selenium import webdriver


      class JSMiddleware(object):
          def process_request(self, request, spider):
              driver = webdriver.Firefox()
              driver.get(request.url)

              body = driver.page_source
              return HtmlResponse(driver.current_url, body=body, encoding='utf-8', request=request)
    • Append the Selenium downloader middleware to your settings.py:

      DOWNLOADER_MIDDLEWARES = {
          'youproject.middlewares.JSMiddleware': 200
      }
    • Use your original spider code:

      import scrapy


      class RakutenSpider(scrapy.Spider):
          name = "rak"
          allowed_domains = ["rakuten.com"]
          start_urls = ['https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore']

          def parse(self, response):
              for sel in response.xpath('//div[@class="page-bottom"]/div'):
                  yield {
                      'titles': sel.xpath("//div[@class='slider-prod-title']").extract_first(),
                      'prices': sel.xpath("//span[@class='price-bold']").extract_first(),
                      'images': sel.xpath("//div[@class='deal-img']/img").extract_first()
                  }
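    • One caveat (a sketch, not part of the original answer): the JSMiddleware above starts a new Firefox process for every request and never quits it, and the OSError: [Errno 8] Exec format error in the question's log usually means the installed driver binary was built for a different OS or CPU architecture. Assuming a correct geckodriver is on the PATH, reusing a single driver and closing it when the spider closes avoids leaking browser processes:

      from scrapy import signals
      from scrapy.http import HtmlResponse
      from selenium import webdriver


      class JSMiddleware(object):
          def __init__(self):
              # One browser instance shared by every request in the crawl
              self.driver = webdriver.Firefox()

          @classmethod
          def from_crawler(cls, crawler):
              middleware = cls()
              # Close the browser when the spider finishes
              crawler.signals.connect(middleware.spider_closed,
                                      signal=signals.spider_closed)
              return middleware

          def spider_closed(self, spider):
              self.driver.quit()

          def process_request(self, request, spider):
              self.driver.get(request.url)
              return HtmlResponse(self.driver.current_url,
                                  body=self.driver.page_source,
                                  encoding='utf-8', request=request)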

More

Regarding "Scrapy: USER_AGENT and ROBOTSTXT_OBEY are set correctly, but I still get a 403 error", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/47315699/
