
python - How to change the Scrapy user agent without a settings file

Reposted. Author: 行者123. Updated: 2023-12-04 03:12:08

I have implemented my spider in a standalone script, following the main example:

import scrapy

class BlogSpider(scrapy.Spider):
    name = 'blogspider'
    start_urls = ['https://blog.scrapinghub.com']

    def parse(self, response):
        for title in response.css('h2.entry-title'):
            yield {'title': title.css('a ::text').extract_first()}

        next_page = response.css('div.prev-post > a ::attr(href)').extract_first()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)

I run it with:

scrapy runspider myspider.py

How can I change the user agent if I have no settings file and did not create the project with startproject? The setting is described here:

https://doc.scrapy.org/en/latest/topics/settings.html

Best answer

You can add headers manually to each request, which lets you specify a custom User-Agent.

In your spider file, when you make a request (note that start_urls is a list, so each URL must be requested individually rather than passing the whole list to scrapy.Request):

for url in self.start_urls:
    yield scrapy.Request(url, callback=self.parse, headers={"User-Agent": "Your Custom User Agent"})

So your spider would look like this:

class BlogSpider(scrapy.Spider):
    name = 'blogspider'
    start_urls = ['https://blog.scrapinghub.com']

    def start_requests(self):
        # start_urls is a list, so issue one request per URL
        for url in self.start_urls:
            yield scrapy.Request(url, callback=self.parse, headers={"User-Agent": "Your Custom User Agent"})

    def parse(self, response):
        for title in response.css('h2.entry-title'):
            yield {'title': title.css('a ::text').extract_first()}

        next_page = response.css('div.prev-post > a ::attr(href)').extract_first()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse, headers={"User-Agent": "Your Custom User Agent"})
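If you prefer not to touch the spider code at all, Scrapy also lets you override any setting from the command line with the -s flag. A minimal sketch (the user-agent string is a placeholder):

```shell
# Override the USER_AGENT setting for a single run, no settings file needed
scrapy runspider myspider.py -s USER_AGENT="Your Custom User Agent"
```

A per-spider alternative is the custom_settings class attribute, e.g. custom_settings = {"USER_AGENT": "Your Custom User Agent"} inside the spider class; both approaches take precedence over a project settings file.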

On the topic of python - How to change the Scrapy user agent without a settings file, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/44272803/
