
python - Scrapy CrawlerProcess not saving data with CrawlSpider


The following code runs and creates the output file without errors. However, nothing is saved to the JSON file.

I turned off autothrottle, which has interfered with downloading data in the past, but that did not fix the problem.
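For reference, autothrottle can be switched off through the same settings dict that is later passed to CrawlerProcess (a minimal sketch; the AUTOTHROTTLE_ENABLED key is Scrapy's standard setting name and is not part of the code shown below):

SETTINGS = {
    # explicitly disable the AutoThrottle extension (it is off by default)
    'AUTOTHROTTLE_ENABLED': False,
}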

Scrapy==1.4.0

from scrapy.crawler import CrawlerProcess
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class MySpider(CrawlSpider):
    name = "spidy"
    allowed_domains = ["cnn.com"]
    start_urls = ["http://www.cnn.com"]

    rules = [Rule(LinkExtractor(allow=['cnn.com/.+']), callback='parse_item', follow=True)]

    def parse_item(self, response):
        print('went to: {}'.format(response.url))
        yield {'url': response.url}


FILE_NAME = 'my_data.json'
SETTINGS = {
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
    'FEED_FORMAT': 'json',
    'FEED_URI': FILE_NAME,
}

process = CrawlerProcess(SETTINGS)
process.crawl(MySpider)
process.start()

EDIT:

The crawler is getting the data, as shown in the log:

2017-11-21 11:07:55 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot)
2017-11-21 11:07:55 [scrapy.utils.log] INFO: Overridden settings: {'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)', 'FEED_URI': 'my_data.json', 'FEED_FORMAT': 'json'}
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.feedexport.FeedExporter']
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-11-21 11:07:55 [scrapy.core.engine] INFO: Spider opened
2017-11-21 11:07:55 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-11-21 11:07:55 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6041
2017-11-21 11:07:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com> (referer: None)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/us> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/congress-capitol-hill> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/president-donald-trump-45> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/us-security> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/trumpmerica> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/state-cnn-politics-magazine> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/opinion/opinion-social-issues> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/opinions/cnnireport> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/vr/vr-archives> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/middle-east> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://imagesource.cnn.com> from <GET http://www.cnn.com/collection>
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/supreme-court-nine> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://transcripts.cnn.com/TRANSCRIPTS/> from <GET http://www.cnn.com/transcripts>
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/pf/> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/luxury/> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/data/markets/> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/technology/> (referer: http://www.cnn.com)
went to: http://www.cnn.com/us
2017-11-21 11:07:56 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.cnn.com/us>
{'url': 'http://www.cnn.com/us'}
2017-11-21 11:07:56 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET http://www.cnn.com/us> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2017-11-21 11:07:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://www.cnn.com/email/subscription> from <GET http://www.cnn.com/newsletters>
...

We can see the crawler visiting URLs, extracting further URLs from the pages it scrapes, getting the response URL (as shown by "went to:"), and then returning the data with {'url': ...}, for example {'url': 'http://www.cnn.com/us'}.

Best Answer

So your code itself works fine, but I assume you stopped it midway or killed it, which leaves the JSON file blank. I would change two things.

First, use jsonlines instead of json. This way, even if I kill the spider, I won't lose many items: each line is a valid JSON object on its own, so items can keep being appended to the same file. With the json format, by contrast, interrupting the program midway leaves you with invalid JSON.
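As a quick sanity check, each line of the jsonlines output can be parsed on its own, even while the crawl is still running (a minimal sketch, assuming the my_data.jsonl file name used in the code below):

import json

with open('my_data.jsonl') as f:
    for line in f:
        item = json.loads(line)  # every line is a complete JSON object
        print(item['url'])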

Second, I would set CONCURRENT_ITEMS to a lower value so that items are exported more frequently (the default is 100).

from scrapy.crawler import CrawlerProcess
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class MySpider(CrawlSpider):
    name = "spidy"
    allowed_domains = ["cnn.com"]
    start_urls = ["http://www.cnn.com"]

    rules = [Rule(LinkExtractor(allow=['cnn.com/.+']), callback='parse_item', follow=True)]

    def parse_item(self, response):
        print('went to: {}'.format(response.url))
        yield {'url': response.url}


FILE_NAME = 'my_data.jsonl'
SETTINGS = {
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
    'FEED_FORMAT': 'jsonlines',  # one JSON object per line
    'FEED_URI': FILE_NAME,
    'CONCURRENT_ITEMS': 1,       # export items as soon as they are scraped
}

process = CrawlerProcess(SETTINGS)
process.crawl(MySpider)
process.start()

After that, you will find the data is exported as expected:

[screenshot: items exported]

Regarding "python - Scrapy CrawlerProcess not saving data with CrawlSpider", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/47417332/
