
python - How to save downloaded files when running a spider on Scrapinghub?


stockInfo.py contains:

import scrapy
import re
import pkgutil

class QuotesSpider(scrapy.Spider):
    name = "stockInfo"
    data = pkgutil.get_data("tutorial", "resources/urls.txt")
    data = data.decode()
    start_urls = data.split("\r\n")

    def parse(self, response):
        company = re.findall("[0-9]{6}", response.url)[0]
        filename = '%s_info.html' % company
        with open(filename, 'wb') as f:
            f.write(response.body)

Run the spider stockInfo from the Windows cmd prompt:

d:
cd tutorial
scrapy crawl stockInfo

Now the web page for every url listed in resources/urls.txt is downloaded into the directory d:/tutorial.

Then I deploy the spider to Scrapinghub and run the stockInfo spider.

(screenshot: the stockInfo job running on Scrapinghub)

There are no errors, but where are the downloaded web pages?
And how are the following lines executed on Scrapinghub?

with open(filename, 'wb') as f:
    f.write(response.body)

How can I save the data on Scrapinghub and download it from Scrapinghub after the job finishes?

First, install scrapinghub:

pip install scrapinghub[msgpack]

I rewrote the spider as Thiago Curvelo suggested, then deployed it to my Scrapinghub project.

Deploy log location: C:\Users\dreams\AppData\Local\Temp\shub_deploy_yzstvtj8.log
Error: Deploy failed: b'{"status": "error", "message": "Internal error"}'
_get_apisettings, commands_module='sh_scrapy.commands')
File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 148, in _run_usercode
_run(args, settings)
File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 103, in _run
_run_scrapy(args, settings)
File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 111, in _run_scrapy
execute(settings=settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 148, in execute
cmd.crawler_process = CrawlerProcess(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 243, in __init__
super(CrawlerProcess, self).__init__(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 134, in __init__
self.spider_loader = _get_spider_loader(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 330, in _get_spider_loader
return loader_cls.from_settings(settings.frozencopy())
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 61, in from_settings
return cls(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 25, in __init__
self._load_all_spiders()
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 47, in _load_all_spiders
for module in walk_modules(name):
File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 71, in walk_modules
submod = import_module(fullpath)
File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/app/__main__.egg/mySpider/spiders/stockInfo.py", line 4, in <module>
ImportError: cannot import name ScrapinghubClient
{"message": "shub-image-info exit code: 1", "details": null, "error": "image_info_error"}
{"status": "error", "message": "Internal error"}

requirements.txt contains only one line:

scrapinghub[msgpack]

scrapinghub.yml contains:

project: 123456
requirements:
    file: requirements.tx

Now deploy it:

D:\mySpider>shub deploy 123456
Packing version 1.0
Deploying to Scrapy Cloud project "123456"
Deploy log last 30 lines:

Deploy log location: C:\Users\dreams\AppData\Local\Temp\shub_deploy_4u7kb9ml.log
Error: Deploy failed: b'{"status": "error", "message": "Internal error"}'
File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 148, in _run_usercode
_run(args, settings)
File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 103, in _run
_run_scrapy(args, settings)
File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 111, in _run_scrapy
execute(settings=settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 148, in execute
cmd.crawler_process = CrawlerProcess(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 243, in __init__
super(CrawlerProcess, self).__init__(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 134, in __init__
self.spider_loader = _get_spider_loader(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 330, in _get_spider_loader
return loader_cls.from_settings(settings.frozencopy())
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 61, in from_settings
return cls(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 25, in __init__
self._load_all_spiders()
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 47, in _load_all_spiders
for module in walk_modules(name):
File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 71, in walk_modules
submod = import_module(fullpath)
File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/tmp/unpacked-eggs/__main__.egg/mySpider/spiders/stockInfo.py", line 5, in <module>
from scrapinghub import ScrapinghubClient
ImportError: cannot import name ScrapinghubClient
{"message": "shub-image-info exit code: 1", "details": null, "error": "image_info_error"}
{"status": "error", "message": "Internal error"}

1. The problem remains:

ImportError: cannot import name ScrapinghubClient

2. My local machine only has Python 3.7 on Windows 7, so why does the error message show paths like this:

File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 71, in walk_modules

Is this an error message from Scrapinghub (the remote side) that is simply sent back and displayed on my local machine?

Best Answer

Writing data to disk is not reliable in a cloud environment these days, because everything runs in containers and containers are ephemeral.

However, you can save your data with Scrapinghub's Collections API. You can use it either directly through the HTTP endpoint or through this wrapper: https://python-scrapinghub.readthedocs.io/en/latest/
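If you want to call the endpoint directly instead of using the wrapper, a minimal sketch with the requests library could look like the following. The collection name mystuff, the project ID, the API key and the item content are placeholders, and the URL shown is assumed here to be the standard Collections storage endpoint, so double-check it against the Collections API documentation:

import json
import requests

project_id = '12345'   # placeholder: your Scrapy Cloud project ID
apikey = 'XXXX'        # placeholder: your Scrapinghub API key

# Assumed Collections storage endpoint: /collections/<project_id>/s/<collection_name>
url = 'https://storage.scrapinghub.com/collections/%s/s/mystuff' % project_id

# Write a single item; every item needs a unique "_key".
item = {'_key': '000001', 'body': '<html>...</html>'}
resp = requests.post(url, auth=(apikey, ''), data=json.dumps(item))
resp.raise_for_status()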

With python-scrapinghub, your code would look something like this:

from scrapinghub import ScrapinghubClient
from contextlib import closing

project_id = '12345'
apikey = 'XXXX'
client = ScrapinghubClient(apikey)
store = client.get_project(project_id).collections.get_store('mystuff')

# ...

def parse(self, response):
    company = re.findall("[0-9]{6}", response.url)[0]
    with closing(store.create_writer()) as writer:
        writer.write({
            '_key': company,
            'body': response.body,
        })

Once you have saved something to a collection, a link will appear in your dashboard:

(screenshot: the Collections link in the Scrapinghub dashboard)
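For the second half of the question, downloading the data after the job has finished, you can read the same collection back from your local machine with python-scrapinghub. This is a minimal sketch, assuming the store is named mystuff as above and that each item comes back with the '_key' and 'body' fields the spider wrote:

from scrapinghub import ScrapinghubClient

project_id = '12345'   # same placeholders as in the example above
apikey = 'XXXX'

client = ScrapinghubClient(apikey)
store = client.get_project(project_id).collections.get_store('mystuff')

# Iterate over the items the spider stored and write each one to a local file.
for item in store.iter():
    body = item['body']
    if isinstance(body, str):
        body = body.encode('utf-8')
    with open('%s_info.html' % item['_key'], 'wb') as f:
        f.write(body)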

Edit:

To make sure the dependency (scrapinghub[msgpack]) is installed in the cloud, add it to your requirements.txt or Pipfile and reference that file in your scrapinghub.yml. For example:

# project_directory/scrapinghub.yml

projects:
    default: 12345

stacks:
    default: scrapy:1.5-py3

requirements:
    file: requirements.txt

(https://shub.readthedocs.io/en/stable/deploying.html#deploying-dependencies)

That way, Scrapinghub (the cloud service) will install scrapinghub (the Python library). :)

Hope this helps.

Regarding "python - How to save downloaded files when running a spider on Scrapinghub?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55196857/
