
python - Scrapy outputs [ into my .json file


Real Scrapy and Python newbie here, so please bear with any silly mistakes. I'm trying to write a spider that recursively crawls a news site and returns each article's headline, date, and first paragraph. I managed to scrape one page for one item, but as soon as I try to expand beyond that, everything goes wrong.

My spider:

import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import Selector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from basic.items import BasicItem

class BasicSpiderSpider(CrawlSpider):
name = "basic_spider"
allowed_domains = ["news24.com/"]
start_urls = (
'http://www.news24.com/SouthAfrica/News/56-children-hospitalised-for-food-poisoning-20150328',
)

rules = (Rule (SgmlLinkExtractor(allow=("", ))
, callback="parse_items", follow= True),
)
def parse_items(self, response):
hxs = Selector(response)
titles = hxs.xpath('//*[@id="aspnetForm"]')
items = []
item = BasicItem()
item['Headline'] = titles.xpath('//*[@id="article_special"]//h1/text()').extract()
item["Article"] = titles.xpath('//*[@id="article-body"]/p[1]/text()').extract()
item["Date"] = titles.xpath('//*[@id="spnDate"]/text()').extract()
items.append(item)
return items
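
The BasicItem imported from basic.items is not shown in the question; it is assumed to be a plain Scrapy item declaring the three fields used above, roughly:

# basic/items.py -- assumed definition, not shown in the original question
from scrapy.item import Item, Field

class BasicItem(Item):
    Headline = Field()
    Article = Field()
    Date = Field()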

I keep running into the same problem: every time I run the spider, the output file contains nothing but a "[". To figure out what is going wrong, I ran the following command:

c:\Scrapy Spiders\basic>scrapy parse --spider=basic_spider -c parse_items -d 2 -v http://www.news24.com/SouthAfrica/News/56-children-hospitalised-for-food-poisoning-20150328

which gives me the following output:

2015-03-30 15:28:21+0200 [scrapy] INFO: Scrapy 0.24.5 started (bot: basic)
2015-03-30 15:28:21+0200 [scrapy] INFO: Optional features available: ssl, http11
2015-03-30 15:28:21+0200 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'basic.spiders', 'SPIDER_MODULES': ['basic.spiders'], 'DEPTH_LIMIT': 1, 'DOWNLOAD_DELAY': 2, 'BOT_NAME': 'basic'}
2015-03-30 15:28:21+0200 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-03-30 15:28:21+0200 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-03-30 15:28:21+0200 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-03-30 15:28:21+0200 [scrapy] INFO: Enabled item pipelines:
2015-03-30 15:28:21+0200 [basic_spider] INFO: Spider opened
2015-03-30 15:28:21+0200 [basic_spider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-03-30 15:28:21+0200 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-03-30 15:28:21+0200 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-03-30 15:28:22+0200 [basic_spider] DEBUG: Crawled (200) <GET http://www.news24.com/SouthAfrica/News/56-children-hospitalised-for-food-poisoning-20150328> (referer: None)
2015-03-30 15:28:22+0200 [basic_spider] INFO: Closing spider (finished)
2015-03-30 15:28:22+0200 [basic_spider] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 282,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 145301,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 3, 30, 13, 28, 22, 177000),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2015, 3, 30, 13, 28, 21, 878000)}
2015-03-30 15:28:22+0200 [basic_spider] INFO: Spider closed (finished)

>>> DEPTH LEVEL: 1 <<<
# Scraped Items ------------------------------------------------------------
[{'Article': [u'Johannesburg - Fifty-six children were taken to\nPietermaritzburg hospitals after showing signs of food poisoning while at\nschool, KwaZulu-Natal emergency services said on Friday.'],
'Date': [u'2015-03-28 07:30'],
'Headline': [u'56 children hospitalised for food poisoning']}]
# Requests -----------------------------------------------------------------
[]

So I can see that the item is being scraped, but no usable item data ends up in the json file. This is how I'm running scrapy:

scrapy crawl basic_spider -o test.json
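
For comparison, once items are actually extracted, test.json should end up holding a JSON array built from items like the one dumped by scrapy parse above, roughly:

[{"Headline": ["56 children hospitalised for food poisoning"],
  "Date": ["2015-03-28 07:30"],
  "Article": ["Johannesburg - Fifty-six children were taken to\nPietermaritzburg hospitals after showing signs of food poisoning while at\nschool, KwaZulu-Natal emergency services said on Friday."]}]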

I have been looking at the last line (return items), since changing it to either yield or print still gives me no scraped items in the output.

Best Answer

A bare "[" in the output file usually means that nothing was scraped and no items were extracted. With the trailing slash in allowed_domains, the offsite filter rejects every link the rule extracts, so parse_items never runs during a normal crawl (the scrapy parse command calls the callback directly, which is why it still shows an item).

In your case, fix your allowed_domains setting:

allowed_domains = ["news24.com"]

Aside from that, a bit of cleaning up, from a perfectionist:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

from basic.items import BasicItem


class BasicSpiderSpider(CrawlSpider):
    name = "basic_spider"
    allowed_domains = ["news24.com"]
    start_urls = [
        'http://www.news24.com/SouthAfrica/News/56-children-hospitalised-for-food-poisoning-20150328',
    ]

    rules = [
        Rule(LinkExtractor(), callback="parse_items", follow=True),
    ]

    def parse_items(self, response):
        for title in response.xpath('//*[@id="aspnetForm"]'):
            item = BasicItem()
            item['Headline'] = title.xpath('//*[@id="article_special"]//h1/text()').extract()
            item["Article"] = title.xpath('//*[@id="article-body"]/p[1]/text()').extract()
            item["Date"] = title.xpath('//*[@id="spnDate"]/text()').extract()
            yield item
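
With allowed_domains fixed, re-running scrapy crawl basic_spider -o test.json should fill the file with scraped items. One thing worth noting: the JSON feed exporter appends to an existing output file rather than overwriting it, so delete test.json between runs to avoid ending up with malformed JSON.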

For the question "python - Scrapy outputs [ into my .json file", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/29348425/
