
python - Scrapy pagination timing issue


So I've set up a spider that is very similar to the example in the Scrapy documentation.

I want the spider to scrape all the quotes on a page before moving to the next one. I also want it to parse only one quote per second. So if a page has 20 quotes, it should take 20 seconds to scrape the quotes, then 1 second to move to the next page.

As it stands, my current implementation iterates through every page before actually fetching the quote information.

import scrapy

class AuthorSpider(scrapy.Spider):
    name = 'author'

    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # follow links to author pages
        for href in response.css('.author+a::attr(href)').extract():
            yield scrapy.Request(response.urljoin(href),
                                 callback=self.parse_author)

        # follow pagination links
        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)

    def parse_author(self, response):
        def extract_with_css(query):
            return response.css(query).extract_first().strip()

        yield {
            'name': extract_with_css('h3.author-title::text'),
            'birthdate': extract_with_css('.author-born-date::text'),
            'bio': extract_with_css('.author-description::text'),
        }

Here is the relevant part of my settings.py file:

ROBOTSTXT_OBEY = True
CONCURRENT_REQUESTS = 1
DOWNLOAD_DELAY = 2
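
A note on the throttling side (an observation added here, not part of the original question): with the settings above, Scrapy waits roughly 2 seconds between requests, and by default it also randomizes that delay between 0.5x and 1.5x. If the goal is a strict one-request-per-second pace, a sketch like the following, using Scrapy's standard settings, would be closer:

ROBOTSTXT_OBEY = True
CONCURRENT_REQUESTS = 1           # only one request in flight at a time
DOWNLOAD_DELAY = 1                # wait 1 second between consecutive requests
RANDOMIZE_DOWNLOAD_DELAY = False  # disable the default 0.5x-1.5x jitter

These settings only pace the requests; they do not control the order in which pages and quotes are visited, which is what the answer below addresses.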

Best Answer

You can orchestrate how you yield your scrapy.Requests.

For example, you can create the Request for the next page up front, but only yield it once all the author Requests on the current page have finished scraping their items.

Example:

import scrapy

# Tracks author requests that are still pending: author URL -> 'processed' flag
pending_authors = {}

class AuthorSpider(scrapy.Spider):
    name = 'author'

    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # Process pagination links first
        next_page = response.css('li.next a::attr(href)').extract_first()
        next_page_request = None
        if next_page is not None:
            next_page = response.urljoin(next_page)
            # Create the Request object, but do not yield it yet
            next_page_request = scrapy.Request(next_page, callback=self.parse)

        # Request scraping of the authors, passing along a reference
        # to the Request for the next page
        for href in response.css('.author+a::attr(href)').extract():
            author_url = response.urljoin(href)
            # Skip authors already seen, so an author repeated on a later page
            # (whose request the dupefilter would drop) is not re-marked as pending
            if author_url not in pending_authors:
                pending_authors[author_url] = False  # mark as 'not processed'
                yield scrapy.Request(author_url, callback=self.parse_author,
                                     meta={'next_page_request': next_page_request})

    def parse_author(self, response):
        def extract_with_css(query):
            return response.css(query).extract_first().strip()

        item = {
            'name': extract_with_css('h3.author-title::text'),
            'birthdate': extract_with_css('.author-born-date::text'),
            'bio': extract_with_css('.author-description::text'),
        }

        # Mark this author as 'processed'
        pending_authors[response.url] = True

        # Check whether all authors have finished processing
        if all(pending_authors.values()):
            yield item
            # Request the next page only after finishing all authors
            next_page_request = response.meta['next_page_request']
            if next_page_request is not None:
                yield next_page_request
        else:
            yield item
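
Assuming the spider is saved in a file such as author_spider.py (the file name here is just for illustration), it can be run and its items exported with the standard Scrapy command line:

scrapy runspider author_spider.py -o authors.json

A note on the design: pending_authors is a module-level dict shared by the whole process. Moving it into an instance attribute (for example, initializing self.pending_authors = {} on the spider) would scope the bookkeeping to a single crawl, which matters if several spiders run in the same process.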

Regarding python - Scrapy pagination timing issue, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41726335/
