
python - Scrapy - "scrapy crawl" catches exceptions internally and hides them from Jenkins's "catch" clause

Reprinted · Author: 太空宇宙 · Updated: 2023-11-04 04:04:30

I run Scrapy every day via Jenkins, and I want exceptions to be emailed to me.

Here is an example spider:

from scrapy import Spider


class ExceptionTestSpider(Spider):
    name = 'exception_test'

    start_urls = ['http://google.com']

    def parse(self, response):
        raise Exception

And here is the Jenkinsfile:

#!/usr/bin/env groovy
try {
    node('jenkins-small-py3.6') {
        ...
        stage('Execute Spider') {
            sh '''
                cd ...
                /usr/local/bin/scrapy crawl exception_test
            '''
        }
    }
} catch (exc) {
    echo "Caught: ${exc}"
    mail subject: "...",
         body: "The spider is failing",
         to: "...",
         from: "..."

    /* Rethrow to fail the Pipeline properly */
    throw exc
}

Here is the log:

...
INFO:scrapy.core.engine:Spider opened
2019-08-22 10:49:49 [scrapy.core.engine] INFO: Spider opened
INFO:scrapy.extensions.logstats:Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-08-22 10:49:49 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
DEBUG:scrapy.extensions.telnet:Telnet console listening on 127.0.0.1:6023
DEBUG:scrapy.downloadermiddlewares.redirect:Redirecting (301) to <GET http://www.google.com/> from <GET http://google.com>
DEBUG:scrapy.core.engine:Crawled (200) <GET http://www.google.com/> (referer: None)
ERROR:scrapy.core.scraper:Spider error processing <GET http://www.google.com/> (referer: None)
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/twisted/internet/defer.py", line 654, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "...", line ..., in parse
    raise Exception
Exception
2019-08-22 10:49:50 [scrapy.core.scraper] ERROR: Spider error processing <GET http://www.google.com/> (referer: None)
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/twisted/internet/defer.py", line 654, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "...", line ..., in parse
    raise Exception
Exception
INFO:scrapy.core.engine:Closing spider (finished)
2019-08-22 10:49:50 [scrapy.core.engine] INFO: Closing spider (finished)
INFO:scrapy.statscollectors:Dumping Scrapy stats:
{
...
}
INFO:scrapy.core.engine:Spider closed (finished)
2019-08-22 10:49:50 [scrapy.core.engine] INFO: Spider closed (finished)
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

And no email is sent. I believe Scrapy catches the exception internally, writes it to the log, and then exits without an error.

How do I get Jenkins to see the exception?

Best Answer

The problem is that scrapy does not exit with a non-zero exit code when the scrape fails (src: https://github.com/scrapy/scrapy/issues/1231).
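Because of this, a common Jenkins-side workaround is to tee the crawl output to a file and grep it for ERROR lines, failing the stage manually. A minimal sketch, using a canned two-line log excerpt in place of the real `scrapy crawl exception_test 2>&1 | tee crawl.log` output:

```shell
# Simulated crawl.log; in the Jenkinsfile this file would be produced by:
#   /usr/local/bin/scrapy crawl exception_test 2>&1 | tee crawl.log
cat > crawl.log <<'EOF'
2019-08-22 10:49:50 [scrapy.core.scraper] ERROR: Spider error processing <GET http://www.google.com/> (referer: None)
2019-08-22 10:49:50 [scrapy.core.engine] INFO: Spider closed (finished)
EOF

# Fail the stage ourselves if Scrapy logged any ERROR records.
if grep -q "ERROR" crawl.log; then
    echo "spider errors found - the Jenkinsfile would 'exit 1' here" >&2
fi
```

In the real `sh` step, replace the `echo` with `exit 1`; the non-zero exit fails the step, which throws and reaches the `catch` block.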

As the commenters on that issue suggest, I recommend adding a custom command (http://doc.scrapy.org/en/master/topics/commands.html#custom-project-commands).

For python - Scrapy - "scrapy crawl" catches exceptions internally and hides them from Jenkins's "catch" clause, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57608702/
