
python - Running Scrapy from a script - it hangs

Reposted. Author: 太空狗. Updated: 2023-10-29 21:44:30

I'm trying to run Scrapy from a script, as discussed here. It suggests using this snippet, but when I do, it hangs indefinitely. That snippet was written back in version 0.10; is it still compatible with the current stable release?

Best Answer

# Note: this targets the old Scrapy 0.x APIs (scrapy.conf, queue.append_spider, etc.)
from scrapy import signals, log
from scrapy.conf import settings
from scrapy.crawler import CrawlerProcess
from scrapy.http import Request
from scrapy.spider import BaseSpider
from scrapy.xlib.pydispatch import dispatcher

def handleSpiderIdle(spider):
    '''Handle the spider idle event.''' # http://doc.scrapy.org/topics/signals.html#spider-idle
    print '\nSpider idle: %s. Restarting it... ' % spider.name
    for url in spider.start_urls: # reschedule the start URLs
        spider.crawler.engine.crawl(Request(url, dont_filter=True), spider)

# global settings: http://doc.scrapy.org/topics/settings.html
mySettings = {'LOG_ENABLED': True, 'ITEM_PIPELINES': 'mybot.pipeline.validate.ValidateMyItem'}
settings.overrides.update(mySettings)

crawlerProcess = CrawlerProcess(settings)
crawlerProcess.install()
crawlerProcess.configure()

class MySpider(BaseSpider):
    start_urls = ['http://site_to_scrape']

    def parse(self, response):
        # build and yield your items here
        yield item

spider = MySpider() # create a spider ourselves
crawlerProcess.queue.append_spider(spider) # add it to the spider pool

# use this if you need to handle the idle event (e.g. restart the spider)
dispatcher.connect(handleSpiderIdle, signals.spider_idle)

log.start() # depends on LOG_ENABLED
print "Starting crawler."
crawlerProcess.start()
print "Crawler stopped."

Update:

If you also need per-spider settings, here is an example:

for spiderConfig in spiderConfigs:
    spiderConfig = spiderConfig.copy() # a dictionary like the global settings above
    spiderName = spiderConfig.pop('name') # the spider's name is in the config - the same spider class can run as several instances under different names
    spiderModuleName = spiderConfig.pop('spiderClass') # the module containing the spider is also in the config
    spiderModule = __import__(spiderModuleName, {}, {}, ['']) # import that module
    SpiderClass = spiderModule.Spider # the spider class is named 'Spider'
    spider = SpiderClass(name=spiderName, **spiderConfig) # create the spider with its particular settings
    crawlerProcess.queue.append_spider(spider) # add the spider to the spider pool
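The dynamic import in that loop uses the old `__import__(name, {}, {}, [''])` idiom; in current Python the stdlib's importlib expresses the same thing more clearly. A general sketch, demonstrated on a stdlib module since no real spider module exists here (the helper name is illustrative):

```python
import importlib

def load_spider_class(module_name, class_name="Spider"):
    """Import a module by its dotted path and return the named class,
    mirroring the spiderModule.Spider lookup in the loop above."""
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Demonstrated with a stdlib module instead of a real spider module:
cls = load_spider_class("collections", "OrderedDict")
print(cls.__name__)  # → OrderedDict
```

importlib.import_module handles dotted paths like scraper.spiders.plunderhere_com directly, which is exactly what the fragment-list trick in the old idiom was working around.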

Example settings from a spider config file:

name = plunderhere_com
allowed_domains = plunderhere.com
spiderClass = scraper.spiders.plunderhere_com
start_urls = http://www.plunderhere.com/categories.php?
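Those plain `key = value` lines are not a standard config format, so they need a small parser before they can feed the loop above. A minimal sketch (the function name is illustrative, not from the original answer):

```python
def parse_spider_config(text):
    """Parse simple 'key = value' lines into a dict, skipping blank lines."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")  # split on the first '=' only
        config[key.strip()] = value.strip()
    return config

sample = """
name = plunderhere_com
allowed_domains = plunderhere.com
spiderClass = scraper.spiders.plunderhere_com
start_urls = http://www.plunderhere.com/categories.php?
"""
cfg = parse_spider_config(sample)
print(cfg["name"])  # → plunderhere_com
```

Splitting on the first '=' only matters here because values such as URLs can themselves contain the character.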

For "python - Running Scrapy from a script - it hangs", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/6494067/
