For a few days now I have been trying to create a crawler in Scrapy, but in every project I run into the same error: spider not found. No matter what changes I make or which tutorial I follow, it always returns the same error.
Can someone suggest where I should look for the mistake?
Thanks!
Windows 10, Python 2.7
C:.
│ scrapy.cfg
│
└───scrapscrapy
│ items.py
│ middlewares.py
│ pipelines.py
│ settings.py
│ settings.pyc
│ __init__.py
│ __init__.pyc
│
└───spiders
SSSpider.py
SSSpider.pyc
items.py
from scrapy.item import Item, Field

class ScrapscrapyItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    Heading = Field()
    Content = Field()
    Source_Website = Field()
    pass
SSSpider.py
from scrapy.selector import Selector
from scrapy.spider import Spider
from Scrapscrapy.items import ScrapscrapyItem

class ScrapscrapySpider(Spider):
    name = "ss"
    allowed_domains = ["yellowpages.md/rom/companies/info/2683-intelsmdv-srl"]
    start_url = ['http://yellowpages.md/rom/companies/info/2683-intelsmdv-srl/']

    def parse(self, response):
        sel = Selector(response)
        item = ScrapscrapyItem()
        item['Heading'] = sel.xpath('/html/body/div[2]/div[2]/div/div/div/div/div[1]/div/div[2]/div/article/div/div[1]/div[2]/h2').extract
        item['Content'] = sel.xpath('/html/body/div[2]/div[2]/div/div/div/div/div[1]/div/div[2]/div/article/div/div[1]/div[2]/div[2]/div/div[2]/div/div[1]/div[1]').extract
        item['Source_Website'] = 'yellowpages.md/rom/companies/info/2683-intelsmdv-srl'
        return item
settings.py
BOT_NAME = 'scrapscrapy'
SPIDER_MODULES = ['scrapscrapy.spiders']
NEWSPIDER_MODULE = 'scrapscrapy.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'scrapscrapy (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
Command line:
C:\Users\nastea\Desktop\scrapscrapy>scrapy crawl ss
c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\spiderloader.py:37: RuntimeWarning:
Traceback (most recent call last):
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\spiderloader.py", line 31, in _load_all_spiders
for module in walk_modules(name):
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\utils\misc.py", line 63, in walk_modules
mod = import_module(path)
File "c:\python27\lib\importlib\__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named spiders
Could not load spiders from module 'scrapscrapy.spiders'. Check SPIDER_MODULES setting
warnings.warn(msg, RuntimeWarning)
2017-02-19 14:21:16 [scrapy.utils.log] INFO: Scrapy 1.3.2 started (bot: scrapscrapy)
2017-02-19 14:21:16 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scrapscrapy.spiders', 'SPIDER_MODULES': ['scrapscrapy.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'scrapscrapy'}
Traceback (most recent call last):
File "c:\python27\Scripts\scrapy-script.py", line 11, in <module>
load_entry_point('scrapy==1.3.2', 'console_scripts', 'scrapy')()
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\commands\crawl.py", line 57, in run
self.crawler_process.crawl(spname, **opts.spargs)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\crawler.py", line 162, in crawl
crawler = self.create_crawler(crawler_or_spidercls)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\crawler.py", line 190, in create_crawler
return self._create_crawler(crawler_or_spidercls)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\crawler.py", line 194, in _create_crawler
spidercls = self.spider_loader.load(spidercls)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\spiderloader.py", line 51, in load
raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: ss'
EDIT
As eLRuLL suggested, I added an __init__.py
file to the spiders
folder, and I also changed scrapy.spider to scrapy.spiders, since it told me the old module had been removed. Now cmd returns this:
C:\Users\nastea\Desktop\scrapscrapy>scrapy crawl ss
c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\spiderloader.py:37: RuntimeWarning:
Traceback (most recent call last):
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\spiderloader.py", line 31, in _load_all_spiders
for module in walk_modules(name):
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\utils\misc.py", line 71, in walk_modules
submod = import_module(fullpath)
File "c:\python27\lib\importlib\__init__.py", line 37, in import_module
__import__(name)
File "C:\Users\nastea\Desktop\scrapscrapy\scrapscrapy\spiders\SSSpider.py", line 3, in <module>
from Scrapscrapy.items import ScrapscrapyItem
ImportError: No module named Scrapscrapy.items
Could not load spiders from module 'scrapscrapy.spiders'. Check SPIDER_MODULES setting
warnings.warn(msg, RuntimeWarning)
2017-02-19 15:13:36 [scrapy.utils.log] INFO: Scrapy 1.3.2 started (bot: scrapscrapy)
2017-02-19 15:13:36 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scrapscrapy.spiders', 'SPIDER_MODULES': ['scrapscrapy.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'scrapscrapy'}
Traceback (most recent call last):
File "c:\python27\Scripts\scrapy-script.py", line 11, in <module>
load_entry_point('scrapy==1.3.2', 'console_scripts', 'scrapy')()
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\commands\crawl.py", line 57, in run
self.crawler_process.crawl(spname, **opts.spargs)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\crawler.py", line 162, in crawl
crawler = self.create_crawler(crawler_or_spidercls)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\crawler.py", line 190, in create_crawler
return self._create_crawler(crawler_or_spidercls)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\crawler.py", line 194, in _create_crawler
spidercls = self.spider_loader.load(spidercls)
File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\spiderloader.py", line 51, in load
raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: ss'
Best Answer
It looks like something happened to the __init__.py
file inside your spiders
folder.
Try adding it yourself (leave it empty):
───spiders
__init__.py
SSSpider.py
SSSpider.pyc
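The answer above restores spider discovery; the ImportError shown in the edit (No module named Scrapscrapy.items) then comes from the capitalized package name in the import, since Python module names are case-sensitive and the package directory in the tree is lowercase scrapscrapy. A sketch of a corrected SSSpider.py, keeping the original XPath expressions (note that this also renames the misspelled start_url attribute, which Scrapy would otherwise silently ignore, and calls .extract() instead of storing the bound method):

```python
# spiders/SSSpider.py (sketch)
from scrapy.spiders import Spider                  # 'scrapy.spider' was removed
from scrapscrapy.items import ScrapscrapyItem      # lowercase, matching the package dir

class ScrapscrapySpider(Spider):
    name = "ss"
    # allowed_domains normally holds just the domain, not a full URL path
    allowed_domains = ["yellowpages.md"]
    # plural 'start_urls'; a misspelled 'start_url' is silently ignored
    start_urls = ['http://yellowpages.md/rom/companies/info/2683-intelsmdv-srl/']

    def parse(self, response):
        item = ScrapscrapyItem()
        # .extract() must be *called*; without the parentheses the item
        # stores the method object instead of the extracted text
        item['Heading'] = response.xpath('/html/body/div[2]/div[2]/div/div/div/div/div[1]/div/div[2]/div/article/div/div[1]/div[2]/h2').extract()
        item['Content'] = response.xpath('/html/body/div[2]/div[2]/div/div/div/div/div[1]/div/div[2]/div/article/div/div[1]/div[2]/div[2]/div/div[2]/div/div[1]/div[1]').extract()
        item['Source_Website'] = 'yellowpages.md/rom/companies/info/2683-intelsmdv-srl'
        return item
```

With the empty __init__.py in place and the import fixed, `scrapy list` should print ss and `scrapy crawl ss` should load the spider.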
Regarding the Python scrapy spider KeyError "Spider not found", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42327220/