
python - Scrapy: how do I run two spiders one after the other?

Reposted · Author: 行者123 · Updated: 2023-11-28 21:17:53

I have two spiders in the same project. One of them depends on the other having run first, and they use different pipelines. How can I make sure they run sequentially?

Best answer

From the documentation: https://doc.scrapy.org/en/1.2/topics/request-response.html

The same example, but running the spiders sequentially by chaining the deferreds:

import scrapy
from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging


class MySpider1(scrapy.Spider):
    # Your first spider definition
    ...


class MySpider2(scrapy.Spider):
    # Your second spider definition
    ...


configure_logging()
runner = CrawlerRunner()

@defer.inlineCallbacks
def crawl():
    # Each yield waits for the previous crawl's deferred to fire,
    # so MySpider2 only starts after MySpider1 has finished.
    yield runner.crawl(MySpider1)
    yield runner.crawl(MySpider2)
    reactor.stop()

crawl()
reactor.run()  # the script will block here until the last crawl call is finished
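Since the question also notes that the two spiders use different pipelines, it may help to know that Scrapy lets each spider choose its own pipelines via the `custom_settings` class attribute. A minimal sketch follows; the pipeline paths (`myproject.pipelines.FirstPipeline` and `SecondPipeline`) are hypothetical placeholders, and the classes are shown as plain classes (in a real project they would subclass `scrapy.Spider`) so the snippet stays self-contained:

```python
# Sketch: per-spider pipeline selection via custom_settings.
# In a real project these classes subclass scrapy.Spider, and the
# dotted pipeline paths below are placeholders for your own classes.

class MySpider1:
    name = "spider1"
    custom_settings = {
        # Only FirstPipeline runs for this spider (priority 300).
        "ITEM_PIPELINES": {"myproject.pipelines.FirstPipeline": 300},
    }


class MySpider2:
    name = "spider2"
    custom_settings = {
        # Only SecondPipeline runs for this spider.
        "ITEM_PIPELINES": {"myproject.pipelines.SecondPipeline": 300},
    }
```

With `custom_settings` in place, the chained-deferred runner above needs no extra configuration: each `runner.crawl(...)` call picks up that spider's own pipeline settings.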

Regarding "python - Scrapy: how do I run two spiders one after the other?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/27408880/
