
python - scrapyd deploy shows 0 spiders

Reposted · Author: 太空狗 · Updated: 2023-10-30 01:09:34

I am using Scrapy for a project. I ran the following command to deploy it:

$scrapy deploy -l

which produced the following output:

scrapysite http://localhost:6800/

$cat scrapy.cfg

[settings] 
default = scrapBib.settings

[deploy:scrapysite]
url = http://localhost:6800/
project = scrapBib

$scrapy deploy scrapysite -p scrapBib

Building egg of scrapBib-1346242513
'build/lib.linux-x86_64-2.7' does not exist -- can't clean it

'build/bdist.linux-x86_64' does not exist -- can't clean it

'build/scripts-2.7' does not exist -- can't clean it

zip_safe flag not set; analyzing archive contents...

Deploying scrapBib-1346242513 to `http://localhost:6800/addversion.json`

2012-08-29 17:45:14+0530 [HTTPChannel,22,127.0.0.1] 127.0.0.1 - - [29/Aug/2012:12:15:13 +0000] "POST /addversion.json HTTP/1.1" 200 79 "-" "Python-urllib/2.7"

Server response (200):

{"status": "ok", "project": "scrapBib", "version": "1346242513", "spiders": 0}

As you can see, the response reports 0 spiders even though I have written 3 spiders in the project/spiders/ folder. As a result, I cannot start a crawl with a curl request. Please help.
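The problem is visible in the server response itself: scrapyd reports one spider per spider class it discovers inside the uploaded egg, so `"spiders": 0` means the egg was accepted but contained no spiders. A minimal sketch that parses the response quoted above and flags this condition:

```python
import json

# The addversion.json response quoted above, verbatim.
response = '{"status": "ok", "project": "scrapBib", "version": "1346242513", "spiders": 0}'
info = json.loads(response)

# "status": "ok" only means the egg was uploaded; the spider count tells
# whether scrapyd actually found spider classes inside it.
if info["status"] == "ok" and info["spiders"] == 0:
    print("egg deployed, but scrapyd found no spiders -- rebuild the egg")
```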

Best Answer

I ran into this once as well. Do two things:

1) Delete project.egg-info, build, and setup.py from your local system.

2) Delete all deployed versions from your server.

Then try deploying again; it should be fixed.
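The two steps above can be sketched as shell commands. This is a sketch under the assumptions from the question: the scrapyd server is at http://localhost:6800, the project is named scrapBib, the deploy target is scrapysite, and the commands are run from the project root. delproject.json is scrapyd's endpoint for removing a project and all its deployed versions.

```shell
# 1) Remove stale local build artifacts so the egg is rebuilt from scratch.
rm -rf project.egg-info build setup.py

# 2) Remove the project (and every deployed version of it) from the server.
curl http://localhost:6800/delproject.json -d project=scrapBib

# 3) Deploy again; the freshly built egg should now report the spiders.
scrapy deploy scrapysite -p scrapBib
```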

Regarding "python - scrapyd deploy shows 0 spiders", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/12177845/
