
python - Pause and resume not working in a Scrapy project

Reposted · Author: 太空狗 · Updated: 2023-10-30 00:17:25

I'm working on a Scrapy project that downloads images from a site requiring authentication. Everything works and I can download the images. What I need is to be able to pause and resume the spider whenever required, so I followed what the Scrapy manual says and ran the spider with the command below:

scrapy crawl somespider -s JOBDIR=crawls/somespider-1

To abort the engine, I press CTRL+C, and to resume I run the same command again.

But after resuming, the spider closes within a few minutes; it does not pick up from where it left off.
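For background, the JOBDIR mechanism works by serializing the pending request queue to disk when the crawl is paused and reloading it on the next run. The following is a stdlib-only sketch of that idea, NOT Scrapy's actual implementation; the file path and function names are hypothetical:

```python
# Simplified sketch of JOBDIR-style persistence: pending URLs are written
# to disk on "pause" and reloaded on "resume" instead of starting over.
import os
import pickle

QUEUE_FILE = "crawls/somespider-1/pending.pickle"  # hypothetical path

def save_pending(pending_urls):
    """Persist the not-yet-crawled URLs so a later run can pick them up."""
    os.makedirs(os.path.dirname(QUEUE_FILE), exist_ok=True)
    with open(QUEUE_FILE, "wb") as f:
        pickle.dump(pending_urls, f)

def load_pending():
    """Return the URLs left over from a previous run, or an empty list."""
    if os.path.exists(QUEUE_FILE):
        with open(QUEUE_FILE, "rb") as f:
            return pickle.load(f)
    return []

# First run: crawl two of four URLs, then "pause".
urls = ["http://xyz.com/a", "http://xyz.com/b",
        "http://xyz.com/c", "http://xyz.com/d"]
crawled, pending = urls[:2], urls[2:]
save_pending(pending)

# Second run: resume from disk instead of re-crawling everything.
resumed = load_pending()
print(resumed)  # ['http://xyz.com/c', 'http://xyz.com/d']
```

The point of the sketch is that resuming only works if the previous run shut down cleanly enough to flush this state to disk, which is why how the spider is stopped matters.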

Update:

from scrapy.spider import Spider
from scrapy.http import Request, FormRequest


class SampleSpider(Spider):
    name = "sample project"
    allowed_domains = ["xyz.com"]
    start_urls = (
        'http://abcyz.com/',
    )

    def parse(self, response):
        return FormRequest.from_response(
            response,
            formname='Loginform',
            formdata={'username': 'Name',
                      'password': '****'},
            callback=self.after_login)

    def after_login(self, response):
        # Check that the login succeeded before going on
        if "authentication error" in str(response.body).lower():
            print "I am error"
            return
        else:
            start_urls = ['..', '..']
            for url in start_urls:
                yield Request(url=url, callback=self.parse_photos,
                              dont_filter=True)

    def parse_photos(self, response):
        # downloading image here
        pass

What am I doing wrong?

This is the log I get when I run the spider and then pause it:

2014-05-13 15:40:31+0530 [scrapy] INFO: Scrapy 0.22.0 started (bot: sampleproject)
2014-05-13 15:40:31+0530 [scrapy] INFO: Optional features available: ssl, http11, boto, django
2014-05-13 15:40:31+0530 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'sampleproject.spiders', 'SPIDER_MODULES': ['sampleproject.spiders'], 'BOT_NAME': 'sampleproject'}
2014-05-13 15:40:31+0530 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-05-13 15:40:31+0530 [scrapy] INFO: Enabled downloader middlewares: RedirectMiddleware, HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-05-13 15:40:31+0530 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-05-13 15:40:31+0530 [scrapy] INFO: Enabled item pipelines: ImagesPipeline
2014-05-13 15:40:31+0530 [sample] INFO: Spider opened
2014-05-13 15:40:31+0530 [sample] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-05-13 15:40:31+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-05-13 15:40:31+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080

......................

2014-05-13 15:42:06+0530 [sample] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 141184,
'downloader/request_count': 413,
'downloader/request_method_count/GET': 412,
'downloader/request_method_count/POST': 1,
'downloader/response_bytes': 11213203,
'downloader/response_count': 413,
'downloader/response_status_count/200': 412,
'downloader/response_status_count/404': 1,
'file_count': 285,
'file_status_count/downloaded': 285,
'finish_reason': 'shutdown',
'finish_time': datetime.datetime(2014, 5, 13, 10, 12, 6, 534088),
'item_scraped_count': 125,
'log_count/DEBUG': 826,
'log_count/ERROR': 1,
'log_count/INFO': 9,
'log_count/WARNING': 219,
'request_depth_max': 12,
'response_received_count': 413,
'scheduler/dequeued': 127,
'scheduler/dequeued/disk': 127,
'scheduler/enqueued': 403,
'scheduler/enqueued/disk': 403,
'start_time': datetime.datetime(2014, 5, 13, 10, 10, 31, 232618)}
2014-05-13 15:42:06+0530 [sample] INFO: Spider closed (shutdown)

After resuming, it stops shortly and shows:

INFO: Scrapy 0.22.0 started (bot: sampleproject)
2014-05-13 15:42:32+0530 [scrapy] INFO: Optional features available: ssl, http11, boto, django
2014-05-13 15:42:32+0530 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'sampleproject.spiders', 'SPIDER_MODULES': ['sampleproject.spiders'], 'BOT_NAME': 'sampleproject'}
2014-05-13 15:42:32+0530 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-05-13 15:42:32+0530 [scrapy] INFO: Enabled downloader middlewares: RedirectMiddleware, HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-05-13 15:42:32+0530 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-05-13 15:42:32+0530 [scrapy] INFO: Enabled item pipelines: ImagesPipeline
2014-05-13 15:42:32+0530 [sample] INFO: Spider opened
2014-05-13 15:42:32+0530 [sample] INFO: Resuming crawl (276 requests scheduled)
2014-05-13 15:42:32+0530 [sample] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-05-13 15:42:32+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-05-13 15:42:32+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080


2014-05-13 15:43:19+0530 [sample] INFO: Closing spider (finished)
2014-05-13 15:43:19+0530 [sample] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 3,
'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 3,
'downloader/request_bytes': 132365,
'downloader/request_count': 281,
'downloader/request_method_count/GET': 281,
'downloader/response_bytes': 567884,
'downloader/response_count': 278,
'downloader/response_status_count/200': 278,
'file_count': 1,
'file_status_count/downloaded': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 5, 13, 10, 13, 19, 554981),
'item_scraped_count': 276,
'log_count/DEBUG': 561,
'log_count/ERROR': 1,
'log_count/INFO': 8,
'log_count/WARNING': 1,
'request_depth_max': 1,
'response_received_count': 278,
'scheduler/dequeued': 277,
'scheduler/dequeued/disk': 277,
'scheduler/enqueued': 1,
'scheduler/enqueued/disk': 1,
'start_time': datetime.datetime(2014, 5, 13, 10, 12, 32, 659276)}
2014-05-13 15:43:19+0530 [sample] INFO: Spider closed (finished)

Best Answer

Instead of the command you wrote, you can run:

scrapy crawl somespider --set JOBDIR=crawl1

To stop it, you have to press Ctrl-C exactly once and then wait for Scrapy to shut down. If you press Ctrl-C twice, it forces an immediate shutdown and resuming will not work properly!

Then, to resume the crawl, run the same command again:

scrapy crawl somespider --set JOBDIR=crawl1
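One more thing worth checking in the spider above: JOBDIR also persists the duplicate filter, so requests crawled before the pause are skipped after resume, while requests made with `dont_filter=True` bypass that filter entirely and get re-issued on every run. Below is a stdlib-only sketch of how such a seen-filter works; it is NOT Scrapy's real `RFPDupeFilter`, just the idea behind it:

```python
# Minimal sketch of a request duplicate filter: each URL is fingerprinted
# and recorded, so a second attempt to schedule it is detected and skipped.
import hashlib

class SeenFilter:
    def __init__(self):
        self.seen = set()

    def fingerprint(self, url):
        """Hash the URL into a fixed-size fingerprint."""
        return hashlib.sha1(url.encode("utf-8")).hexdigest()

    def request_seen(self, url):
        """Return True if this URL was already scheduled, else record it."""
        fp = self.fingerprint(url)
        if fp in self.seen:
            return True
        self.seen.add(fp)
        return False

f = SeenFilter()
print(f.request_seen("http://xyz.com/page1"))  # False (first time)
print(f.request_seen("http://xyz.com/page1"))  # True (duplicate, skipped)
```

If the set is persisted to the job directory between runs, the filter is what lets a resumed crawl avoid re-downloading pages; using `dont_filter=True` only where genuinely needed (e.g. the login request) keeps resume behavior predictable.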

Regarding "python - Pause and resume not working in a Scrapy project", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/23628605/
