I'm trying to run a spider, but I get this log:
2015-05-15 12:44:43+0100 [scrapy] INFO: Scrapy 0.24.5 started (bot: reviews)
2015-05-15 12:44:43+0100 [scrapy] INFO: Optional features available: ssl, http11
2015-05-15 12:44:43+0100 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'reviews.spiders', 'SPIDER_MODULES': ['reviews.spiders'], 'DOWNLOAD_DELAY': 2, 'BOT_NAME': 'reviews'}
2015-05-15 12:44:43+0100 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-05-15 12:44:43+0100 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-05-15 12:44:43+0100 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-05-15 12:44:43+0100 [scrapy] INFO: Enabled item pipelines:
2015-05-15 12:44:43+0100 [theverge] INFO: Spider opened
2015-05-15 12:44:43+0100 [theverge] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-05-15 12:44:43+0100 [scrapy] ERROR: Error caught on signal handler: <bound method ?.start_listening of <scrapy.telnet.TelnetConsole instance at 0x105127b48>>
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py", line 1107, in _inlineCallbacks
result = g.send(result)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/core/engine.py", line 77, in start
yield self.signals.send_catch_log_deferred(signal=signals.engine_started)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/signalmanager.py", line 23, in send_catch_log_deferred
return signal.send_catch_log_deferred(*a, **kw)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/utils/signal.py", line 53, in send_catch_log_deferred
*arguments, **named)
--- <exception caught here> ---
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py", line 140, in maybeDeferred
result = f(*args, **kw)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/xlib/pydispatch/robustapply.py", line 54, in robustApply
return receiver(*arguments, **named)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/telnet.py", line 47, in start_listening
self.port = listen_tcp(self.portrange, self.host, self)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/utils/reactor.py", line 14, in listen_tcp
return reactor.listenTCP(x, factory, interface=host)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 495, in listenTCP
p.startListening()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/tcp.py", line 984, in startListening
raise CannotListenError(self.interface, self.port, le)
twisted.internet.error.CannotListenError: Couldn't listen on 127.0.0.1:6073: [Errno 48] Address already in use.
This first error has started appearing in all my spiders, although the other spiders still work despite the "[Errno 48] Address already in use." After that comes:
2015-05-15 12:44:43+0100 [scrapy] DEBUG: Web service listening on 127.0.0.1:6198
2015-05-15 12:44:44+0100 [theverge] DEBUG: Crawled (403) <GET http://www.theverge.com/reviews> (referer: None)
2015-05-15 12:44:44+0100 [theverge] DEBUG: Ignoring response <403 http://www.theverge.com/reviews>: HTTP status code is not handled or not allowed
2015-05-15 12:44:44+0100 [theverge] INFO: Closing spider (finished)
2015-05-15 12:44:44+0100 [theverge] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 191,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 265,
'downloader/response_count': 1,
'downloader/response_status_count/403': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 5, 15, 11, 44, 44, 136026),
'log_count/DEBUG': 3,
'log_count/ERROR': 1,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2015, 5, 15, 11, 44, 43, 829689)}
2015-05-15 12:44:44+0100 [theverge] INFO: Spider closed (finished)
2015-05-15 12:44:44+0100 [scrapy] ERROR: Error caught on signal handler: <bound method ?.stop_listening of <scrapy.telnet.TelnetConsole instance at 0x105127b48>>
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py", line 1107, in _inlineCallbacks
result = g.send(result)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/core/engine.py", line 300, in _finish_stopping_engine
yield self.signals.send_catch_log_deferred(signal=signals.engine_stopped)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/signalmanager.py", line 23, in send_catch_log_deferred
return signal.send_catch_log_deferred(*a, **kw)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/utils/signal.py", line 53, in send_catch_log_deferred
*arguments, **named)
--- <exception caught here> ---
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/defer.py", line 140, in maybeDeferred
result = f(*args, **kw)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/xlib/pydispatch/robustapply.py", line 54, in robustApply
return receiver(*arguments, **named)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scrapy/telnet.py", line 53, in stop_listening
self.port.stopListening()
exceptions.AttributeError: TelnetConsole instance has no attribute 'port'
The error "exceptions.AttributeError: TelnetConsole instance has no attribute 'port'" is new to me... I have no idea what is going on, since all my spiders for other sites run fine.
Can anyone tell me how to fix this?
Edit:
After rebooting, this error disappeared. But I still can't crawl with this spider... here is the log now:
piders_toshub/reviews (spiderDev) $ scrapy crawl theverge
2015-05-15 15:46:55+0100 [scrapy] INFO: Scrapy 0.24.5 started (bot: reviews)
2015-05-15 15:46:55+0100 [scrapy] INFO: Optional features available: ssl, http11
2015-05-15 15:46:55+0100 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'reviews.spiders', 'SPIDER_MODULES': ['reviews.spiders'], 'DOWNLOAD_DELAY': 2, 'BOT_NAME': 'reviews'}
2015-05-15 15:46:55+0100 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-05-15 15:46:55+0100 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-05-15 15:46:55+0100 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-05-15 15:46:55+0100 [scrapy] INFO: Enabled item pipelines:
2015-05-15 15:46:55+0100 [theverge] INFO: Spider opened
2015-05-15 15:46:55+0100 [theverge] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-05-15 15:46:55+0100 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-05-15 15:46:55+0100 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-05-15 15:46:56+0100 [theverge] DEBUG: Crawled (403) <GET http://www.theverge.com/reviews> (referer: None)
2015-05-15 15:46:56+0100 [theverge] DEBUG: Ignoring response <403 http://www.theverge.com/reviews>: HTTP status code is not handled or not allowed
2015-05-15 15:46:56+0100 [theverge] INFO: Closing spider (finished)
2015-05-15 15:46:56+0100 [theverge] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 191,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 265,
'downloader/response_count': 1,
'downloader/response_status_count/403': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 5, 15, 14, 46, 56, 8769),
'log_count/DEBUG': 4,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2015, 5, 15, 14, 46, 55, 673723)}
2015-05-15 15:46:56+0100 [theverge] INFO: Spider closed (finished)
That "2015-05-15 15:46:56+0100 [theverge] DEBUG: Ignoring response <403 http://www.theverge.com/reviews>: HTTP status code is not handled or not allowed" is strange, because I'm using download_delay = 2 and last week I could crawl this site without any problem... what could be happening?
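For reference, the "not handled or not allowed" part of that message comes from Scrapy's HttpErrorMiddleware, which drops non-2xx responses before they ever reach the spider. A minimal sketch, with the spider name and URL taken from the logs above, that lets the 403 through so its body can be inspected:

import scrapy

class TheVergeSpider(scrapy.Spider):
    name = "theverge"
    start_urls = ["http://www.theverge.com/reviews"]
    # Ask HttpErrorMiddleware to pass 403 responses to the callback
    # instead of silently ignoring them.
    handle_httpstatus_list = [403]

    def parse(self, response):
        # Log what the server actually sent back with the 403.
        self.log("Got %s, first bytes: %r" % (response.status, response.body[:200]))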
Best answer
"Address already in use" implies that something else is listening on that port; most likely you are running another spider in parallel? The second error is just a consequence of the first: since the port was never properly instantiated, it now cannot be found to close it.
I would suggest rebooting to make sure no port is still in use, and running only a single spider to see whether it works. If this happens again, you can use netstat or a similar tool to investigate which application is occupying the port.
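If the conflict keeps recurring, another option is a minimal settings.py tweak; TELNETCONSOLE_ENABLED and TELNETCONSOLE_PORT are standard Scrapy settings, and the port range shown here is just an example:

# settings.py
TELNETCONSOLE_ENABLED = False        # skip the telnet console entirely, or:
# TELNETCONSOLE_PORT = [6123, 6173]  # bind it to a known-free port range instead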
Update: HTTP error 403 Forbidden most likely means that you have been banned by the site for making too many requests. To get around this, use a proxy server. Check out Scrapy's HttpProxyMiddleware.
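HttpProxyMiddleware is enabled by default and routes a request through whatever proxy is set in request.meta. A minimal sketch of wiring one into the spider; the proxy address below is a placeholder, not a real endpoint:

import scrapy

class TheVergeSpider(scrapy.Spider):
    name = "theverge"

    def start_requests(self):
        # HttpProxyMiddleware picks up request.meta["proxy"] and
        # sends the request through that proxy.
        yield scrapy.Request(
            "http://www.theverge.com/reviews",
            meta={"proxy": "http://proxy.example.com:8080"},  # placeholder
            callback=self.parse,
        )

    def parse(self, response):
        self.log("Crawled %s (%s)" % (response.url, response.status))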
Regarding python - Scrapy error - HTTP status code is not handled or not allowed, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30258804/