
python - Scrapy spider log not being written to the log file

Reposted — Author: 行者123 · Updated: 2023-12-01 05:35:52

I have a spider class derived from BaseSpider. I call self.log, but nothing gets written to the log file. I set LOG_FILE and LOG_LEVEL on the command line, yet the spider's log output never appears in that file. How can I get the spider's log written to a normal log file?

Best Answer

Are you sure your callback is actually being called?

Because with this simple spider in a file example.py:

from scrapy.spider import BaseSpider

class ExampleSpider(BaseSpider):
    name = "example"
    start_urls = ['http://www.example.com/']

    def parse(self, response):
        self.log('************* my log ***********')

and running it with scrapy runspider example.py --set LOG_FILE=logfile, this is the content of the file:

2013-09-30 22:55:12-0400 [scrapy] INFO: Scrapy 0.16.5 started (bot: mybot)
2013-09-30 22:55:12-0400 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-09-30 21:55:12-0500 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-09-30 21:55:12-0500 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-09-30 21:55:12-0500 [scrapy] DEBUG: Enabled item pipelines: MybotPipeline
2013-09-30 21:55:12-0500 [example] INFO: Spider opened
2013-09-30 21:55:12-0500 [example] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-09-30 21:55:12-0500 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-09-30 21:55:12-0500 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-09-30 21:55:13-0500 [example] DEBUG: Crawled (200) <GET http://www.example.com/> (referer: None)
2013-09-30 21:55:13-0500 [example] DEBUG: ************* my log ***********
2013-09-30 21:55:13-0500 [example] INFO: Closing spider (finished)
2013-09-30 21:55:13-0500 [example] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 221,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 1611,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2013, 10, 1, 2, 55, 13, 315807),
'log_count/DEBUG': 8,
'log_count/INFO': 4,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2013, 10, 1, 2, 55, 12, 991150)}
2013-09-30 21:55:13-0500 [example] INFO: Spider closed (finished)
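Instead of passing --set on every run, the same settings can live in the project's settings.py (a configuration sketch; the setting names are Scrapy's, the values are examples):

```python
# settings.py — equivalent to --set LOG_FILE=logfile --set LOG_LEVEL=DEBUG
LOG_FILE = 'logfile'
LOG_LEVEL = 'DEBUG'
```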

Try adding a deliberate failure in your callback to make sure it is being called — something as simple as raising an exception. If the run finishes without the exception surfacing, your callback is not being called.
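The sanity check above, sketched without Scrapy using a stand-in parse function (the idea transfers directly to a spider's parse method):

```python
# Sanity check: raise loudly inside the callback. If the run ends with no
# exception, the callback was never invoked in the first place.
def parse(response):  # stand-in for the spider's parse() callback
    raise RuntimeError('parse() was invoked')

invoked = False
try:
    parse(response=None)  # simulate the engine calling the callback
except RuntimeError:
    invoked = True

# invoked is True only if the callback actually ran
```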

Regarding python - Scrapy spider log not being written to the log file, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/19093435/
