I am learning how to use scrapy + splash. I have created a project inside a virtual environment and I am now working through this tutorial: https://github.com/scrapy-plugins/scrapy-splash .
I started Splash with:
$ docker run -p 8050:8050 scrapinghub/splash
which produced the following output:
2017-01-12 09:18:50+0000 [-] Log opened.
2017-01-12 09:18:50.225754 [-] Splash version: 2.3
2017-01-12 09:18:50.227033 [-] Qt 5.5.1, PyQt 5.5.1, WebKit 538.1, sip 4.17, Twisted 16.1.1, Lua 5.2
2017-01-12 09:18:50.227201 [-] Python 3.4.3 (default, Nov 17 2016, 01:08:31) [GCC 4.8.4]
2017-01-12 09:18:50.227645 [-] Open files limit: 1048576
2017-01-12 09:18:50.227882 [-] Can't bump open files limit
2017-01-12 09:18:50.333978 [-] Xvfb is started: ['Xvfb', ':1', '-screen', '0', '1024x768x24']
2017-01-12 09:18:50.438528 [-] proxy profiles support is enabled, proxy profiles path: /etc/splash/proxy-profiles
2017-01-12 09:18:50.597573 [-] verbosity=1
2017-01-12 09:18:50.597747 [-] slots=50
2017-01-12 09:18:50.597820 [-] argument_cache_max_entries=500
2017-01-12 09:18:50.598696 [-] Web UI: enabled, Lua: enabled (sandbox: enabled)
2017-01-12 09:18:50.601924 [-] Site starting on 8050
2017-01-12 09:18:50.602119 [-] Starting factory <twisted.web.server.Site object at 0x7ff528490be0>
When I run the following spider:
import scrapy
from scrapy_splash import SplashRequest


class MySpider(scrapy.Spider):
    name = 'spiderman'
    domain = ['web']
    start_urls = ['http://www.example.com']

    def parse(self, response):
        print(response.body)
everything works fine; Scrapy returns the body HTML. However, when I try SplashRequest from the tutorial like this:
import scrapy
from scrapy_splash import SplashRequest


class MySpider(scrapy.Spider):
    name = 'spiderman'
    domain = ['web']
    start_urls = ['http://www.example.com']

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url, self.parse,
                                args={'wait': 0.5},
                                )

    def parse(self, response):
        response.body
I get the following message in the terminal:
File "/Users/username/myVirtualEnvironment/lib/python3.6/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
defer.returnValue((yield download_func(request=request,spider=spider)))
twisted.internet.error.ConnectionRefusedError: Connection was refused by other side: 61: Connection refused.
2017-01-12 11:02:50 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-12 11:03:06 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://192.168.59.103:8050/robots.txt> (failed 1 times): TCP connection timed out: 60: Operation timed out
My guess is that Splash is causing some connection issue, but I don't know how to fix it. I added:
USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64; rv:7.0.1) Gecko/20100101 Firefox/7.7'
DOWNLOAD_DELAY = 0.25
but that did not help!
Question: does anyone know how to solve this?
EDIT: changing ROBOTSTXT_OBEY to False does not work. The full console log:
$ scrapy crawl spiderman
2017-01-12 11:25:18 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: myScrapingProject)
2017-01-12 11:25:18 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'myScrapingProject', 'DOWNLOAD_DELAY': 0.25, 'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter', 'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage', 'NEWSPIDER_MODULE': 'myScrapingProject.spiders', 'SPIDER_MODULES': ['myScrapingProject.spiders'], 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64; rv:7.0.1) Gecko/20100101 Firefox/7.7'}
2017-01-12 11:25:18 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2017-01-12 11:25:18 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy_splash.SplashCookiesMiddleware',
'scrapy_splash.SplashMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-01-12 11:25:18 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy_splash.SplashDeduplicateArgsMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-01-12 11:25:18 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-01-12 11:25:18 [scrapy.core.engine] INFO: Spider opened
2017-01-12 11:25:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-12 11:25:18 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-01-12 11:26:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-12 11:26:33 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.example.com via http://192.168.59.103:8050/render.html> (failed 1 times): TCP connection timed out: 60: Operation timed out.
2017-01-12 11:27:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-12 11:27:48 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.example.com via http://192.168.59.103:8050/render.html> (failed 2 times): TCP connection timed out: 60: Operation timed out.
2017-01-12 11:28:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-12 11:29:03 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://www.example.com via http://192.168.59.103:8050/render.html> (failed 3 times): TCP connection timed out: 60: Operation timed out.
2017-01-12 11:29:03 [scrapy.core.scraper] ERROR: Error downloading <GET http://www.example.com via http://192.168.59.103:8050/render.html>
Traceback (most recent call last):
File "/Users/username/myVirtualEnvironment/lib/python3.6/site-packages/twisted/internet/defer.py", line 1297, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/Users/username/myVirtualEnvironment/lib/python3.6/site-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/Users/username/myVirtualEnvironment/lib/python3.6/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
defer.returnValue((yield download_func(request=request,spider=spider)))
twisted.internet.error.TCPTimedOutError: TCP connection timed out: 60: Operation timed out.
2017-01-12 11:29:03 [scrapy.core.engine] INFO: Closing spider (finished)
2017-01-12 11:29:03 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 3,
'downloader/exception_type_count/twisted.internet.error.TCPTimedOutError': 3,
'downloader/request_bytes': 1746,
'downloader/request_count': 3,
'downloader/request_method_count/POST': 3,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 1, 12, 10, 29, 3, 935527),
'log_count/DEBUG': 4,
'log_count/ERROR': 1,
'log_count/INFO': 10,
'scheduler/dequeued': 4,
'scheduler/dequeued/memory': 4,
'scheduler/enqueued': 4,
'scheduler/enqueued/memory': 4,
'splash/render.html/request_count': 1,
'start_time': datetime.datetime(2017, 1, 12, 10, 25, 18, 451764)}
2017-01-12 11:29:03 [scrapy.core.engine] INFO: Spider closed (finished)
EDIT2: if I run curl http://localhost:8050/render.html?url=http%3A%2F%2Fwww.examp le.com%2F in a new terminal window, I get the following output in the terminal window that is running Splash:
process 1: D-Bus library appears to be incorrectly set up; failed to read machine uuid: Failed to open "/etc/machine-id": No such file or directory
See the manual page for dbus-uuidgen to correct this issue.
2017-01-12 10:48:03.341100 [events] {"path": "/render.html", "load": [0.07, 0.02, 0.0], "fds": 19, "client_ip": "172.17.0.1", "_id": 140690919672912, "method": "GET", "rendertime": 6.497595548629761, "active": 0, "qsize": 0, "maxrss": 83860, "args": {"uid": 140690919672912, "url": "http://www.examp\u200c\u200ble.com/"},
"timestamp": 1484218083, "status_code": 200, "user-agent": "curl/7.51.0"}
2017-01-12 10:48:03.343167 [-] "172.17.0.1" - - [12/Jan/2017:10:48:02 +0000] "GET /render.html?url=http%3A%2F%2Fwww.examp\xe2\x80\x8c\xe2\x80\x8ble.com%2F HTTP/1.1" 200 1262 "-" "curl/7.51.0"
Best answer
The problem is that SPLASH_URL must point to the locally running Splash instance, usually http://localhost:8050, and not the value used as an example in the scrapy-splash README -- http://192.168.59.103:8050 -- which appears in the error log:
Retrying <GET http://www.example.com via http://192.168.59.103:8050/render.html> (failed 1 times)
The OP tested curl http://localhost:8050/render.html?url=http%3A%2F%2Fwww.examp le.com%2F and it worked, so the setting should read:
SPLASH_URL = 'http://localhost:8050'
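For completeness, here is a minimal settings.py sketch along the lines of the scrapy-splash README, with SPLASH_URL pointed at the locally reachable instance. The localhost:8050 address is an assumption based on the docker run -p 8050:8050 command above; adjust it if your Splash container is exposed on a different host or port:
# settings.py -- scrapy-splash wiring as described in the scrapy-splash README.
# The host below assumes Splash was started locally with
#   docker run -p 8050:8050 scrapinghub/splash
# and is therefore reachable on localhost; change it if Splash runs elsewhere.
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
With this setting, the SplashRequest in the spider above is routed through http://localhost:8050/render.html instead of the unreachable 192.168.59.103 address shown in the retry messages.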
Regarding python - Scrapy + Splash: connection refused, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/41610403/