
python - Why is my scrapy spider not scraping anything?

Reposted · Author: 行者123 · Updated: 2023-12-01 04:09:05

I don't know where the problem is; it is probably very easy to fix, since I am new to scrapy. Thanks for your help!

My spider:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from scrapy.linkextractors import LinkExtractor
from scrapy.item import Item

class ArticleSpider(CrawlSpider):
    name = "article"
    allowed_domains = ["economist.com"]
    start_urls = ['http://www.economist.com/sections/science-technology']

    rules = [
        Rule(LinkExtractor(restrict_xpaths='//article'), callback='parse_item', follow=True),
    ]

def parse_item(self, response):
    for sel in response.xpath('//div/article'):
        item = scrapy.Item()
        item['title'] = sel.xpath('a/text()').extract()
        item['link'] = sel.xpath('a/@href').extract()
        item['desc'] = sel.xpath('text()').extract()
    return item

Items:

import scrapy

class EconomistItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()

Part of the log:

INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
Crawled (200) <GET http://www.economist.com/sections/science-technology> (referer: None)

EDIT:

After I added the changes proposed by alecxe, another problem appeared:

Log:

[scrapy] DEBUG: Crawled (200) <GET http://www.economist.com/news/science-and-technology/21688848-stem-cells-are-starting-prove-their-value-medical-treatments-curing-multiple> (referer: http://www.economist.com/sections/science-technology)
2016-02-04 14:05:01 [scrapy] DEBUG: Crawled (200) <GET http://www.economist.com/news/science-and-technology/21689501-beating-go-champion-machine-learning-computer-says-go> (referer: http://www.economist.com/sections/science-technology)
2016-02-04 14:05:02 [scrapy] ERROR: Spider error processing <GET http://www.economist.com/news/science-and-technology/21688848-stem-cells-are-starting-prove-their-value-medical-treatments-curing-multiple> (referer: http://www.economist.com/sections/science-technology)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 28, in process_spider_output
    for x in result:
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 54, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python2.7/site-packages/scrapy/spiders/crawl.py", line 67, in _parse_response
    cb_res = callback(response, **cb_kwargs) or ()
  File "/Users/FvH/Desktop/Python/projects/economist/economist/spiders/article.py", line 18, in parse_item
    item = scrapy.Item()
NameError: global name 'scrapy' is not defined

Settings:

BOT_NAME = 'economist'

SPIDER_MODULES = ['economist.spiders']
NEWSPIDER_MODULE = 'economist.spiders'
USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.97 Safari/537.36"

If I export the data to a csv file, it is obviously empty.
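
For reference, the export is done with Scrapy's built-in feed exporter, using a command along these lines (the output file name is just an example):

scrapy crawl article -o articles.csv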

Thanks

Best Answer

parse_item is not indented properly; it should be:

class ArticleSpider(CrawlSpider):
    name = "article"
    allowed_domains = ["economist.com"]
    start_urls = ['http://www.economist.com/sections/science-technology']

    rules = [
        Rule(LinkExtractor(allow=r'Items'), callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        for sel in response.xpath('//div/article'):
            item = scrapy.Item()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
        return item

Aside from that, there are two more things that need to be fixed:

  • The link extraction part should be fixed so it matches the article links:

    Rule(LinkExtractor(restrict_xpaths='//article'), callback='parse_item', follow=True),
  • You need to specify the USER_AGENT setting to pretend to be a real browser. Otherwise, the response will not contain the list of articles:

    USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.97 Safari/537.36"
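
Beyond those two changes, the NameError in the edited log is raised because the spider module calls scrapy.Item() without ever importing scrapy; and even with that import in place, a plain scrapy.Item has no declared fields, so assigning title/link/desc to it would fail. A minimal sketch of the spider using the EconomistItem from the question (assuming the items module lives at economist/items.py, matching the project layout shown) could look like this:

# Sketch only -- assumes economist/items.py defines EconomistItem as in the question
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

from economist.items import EconomistItem


class ArticleSpider(CrawlSpider):
    name = "article"
    allowed_domains = ["economist.com"]
    start_urls = ['http://www.economist.com/sections/science-technology']

    rules = [
        Rule(LinkExtractor(restrict_xpaths='//article'), callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        # Build one EconomistItem per matched article block
        for sel in response.xpath('//div/article'):
            item = EconomistItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            # yield each item so every article reaches the feed exporter
            yield item

Yielding inside the loop lets the feed exporter collect every matched article, instead of only whatever is left in item after the loop finishes.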

Regarding python - Why is my scrapy spider not scraping anything?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35192468/
