I wrote a crawler, and it is failing for some reason.
I'm new to this, but judging from the log it appears to load the page successfully? I have tested my XPath selectors in a browser and they work fine. I also checked the craigslist.org/robots.txt file, and it does not explicitly forbid what I am doing.
Does anyone know what is going on here?
Could it be related to the user-agent string? Is a different version of the page being served to spiders?
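If user-agent cloaking turns out to be the issue, one low-effort check is to override Scrapy's default USER_AGENT (which identifies the request as coming from Scrapy) with a browser-like string and re-run the spider. This is only a sketch; the browser string below is an arbitrary example, not a recommendation:

```python
# Hypothetical spider settings with a browser-like user agent added.
# Scrapy reads the USER_AGENT key from custom_settings (or settings.py).
custom_settings = {
    'DOWNLOAD_DELAY': 2,
    'CONCURRENT_REQUESTS_PER_DOMAIN': 2,
    'USER_AGENT': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                   'AppleWebKit/537.36 (KHTML, like Gecko) '
                   'Chrome/66.0 Safari/537.36'),  # example string only
}
```

If the response body differs between this run and a run with the default agent, the site is serving spiders a different page.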
Spider
import scrapy


class RentalsCrawler(scrapy.Spider):
    name = "rentals"
    allowed_domains = [
        'craigslist.org'
    ]
    custom_settings = {
        'DOWNLOAD_DELAY': 2,
        'CONCURRENT_REQUESTS_PER_DOMAIN': 2,
    }
    handle_httpstatus_list = [404]

    def start_requests(self):
        start = 0
        nopgs = 1
        pages = []
        for i in range(0, nopgs):
            i = i * 120 + start
            pages.append('https://vancouver.craigslist.ca/search/apa?s=' + str(i))
        for page in pages:
            yield scrapy.Request(url=page, callback=self.parse)

    def parse(self, response):
        prc_path = '//span[@class="result-meta"]/span[@class="result-price"]/text()'
        sqf_path = '//span[@class="result-meta"]/span[@class="housing"]/text()'
        loc_path = '//span[@class="result-meta"]/span[@class="result-hood"]/text()'
        prc_resp = response.xpath(prc_path).extract_first()
        sqf_resp = response.xpath(sqf_path).extract_first()
        loc_resp = response.xpath(loc_path).extract_first()
        objct = { 'prc': prc_resp }
        if sqf_resp:
            objct['sqf'] = sqf_resp
        if loc_resp:
            objct['loc'] = loc_resp
        yield objct
Log
(base) C:\Users\Anthony\tutorial\tutorial\spiders>scrapy runspider rentals.py -o rentals.json
2018-06-07 15:58:23 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: tutorial)
2018-06-07 15:58:23 [scrapy.utils.log] INFO: Versions: lxml 4.2.1.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.4.0, w3lib 1.19.0, Twisted 17.5.0, Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.0.2o 27 Mar 2018), cryptography 2.2.2, Platform Windows-10-10.0.17134-SP0
2018-06-07 15:58:23 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'tutorial', 'FEED_FORMAT': 'json', 'FEED_URI': 'rentals.json', 'NEWSPIDER_MODULE': 'tutorial.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_LOADER_WARN_ONLY': True, 'SPIDER_MODULES': ['tutorial.spiders']}
2018-06-07 15:58:23 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats']
2018-06-07 15:58:23 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-06-07 15:58:23 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-06-07 15:58:23 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-06-07 15:58:23 [scrapy.core.engine] INFO: Spider opened
2018-06-07 15:58:23 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-06-07 15:58:23 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-06-07 15:58:23 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://vancouver.craigslist.ca/robots.txt> (referer: None)
2018-06-07 15:58:24 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://vancouver.craigslist.ca/search/apa?s=0> (referer: None)
2018-06-07 15:58:24 [scrapy.core.engine] INFO: Closing spider (finished)
2018-06-07 15:58:24 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 468,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 36594,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 6, 7, 22, 58, 24, 237666),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2018, 6, 7, 22, 58, 23, 792075)}
2018-06-07 15:58:24 [scrapy.core.engine] INFO: Spider closed (finished)
Output
An empty json file.
scrapy.cfg
# Automatically created by: scrapy startproject
#
# For more information about the [deploy] section see:
# https://scrapyd.readthedocs.io/en/latest/deploy.html
[settings]
default = tutorial.settings
[deploy]
#url = http://localhost:6800/
project = tutorial
settings.py
# -*- coding: utf-8 -*-
# Scrapy settings for tutorial project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'tutorial'
SPIDER_MODULES = ['tutorial.spiders']
NEWSPIDER_MODULE = 'tutorial.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'tutorial (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'tutorial.middlewares.TutorialSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'tutorial.middlewares.TutorialDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
# 'tutorial.pipelines.TutorialPipeline': 300,
#}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
Log (with `yield objct`)
(base) C:\Users\Anthony\tutorial\tutorial\spiders>scrapy runspider rentals.py -o rentals.json
2018-06-07 17:33:16 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: tutorial)
2018-06-07 17:33:16 [scrapy.utils.log] INFO: Versions: lxml 4.2.1.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.4.0, w3lib 1.19.0, Twisted 17.5.0, Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.0.2o 27 Mar 2018), cryptography 2.2.2, Platform Windows-10-10.0.17134-SP0
2018-06-07 17:33:16 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'tutorial', 'FEED_FORMAT': 'json', 'FEED_URI': 'rentals.json', 'NEWSPIDER_MODULE': 'tutorial.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_LOADER_WARN_ONLY': True, 'SPIDER_MODULES': ['tutorial.spiders']}
2018-06-07 17:33:16 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats']
2018-06-07 17:33:16 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-06-07 17:33:16 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-06-07 17:33:16 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-06-07 17:33:16 [scrapy.core.engine] INFO: Spider opened
2018-06-07 17:33:16 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-06-07 17:33:16 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-06-07 17:33:16 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://vancouver.craigslist.ca/robots.txt> (referer: None)
2018-06-07 17:33:16 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://vancouver.craigslist.ca/search/apa?s=0> (referer: None)
2018-06-07 17:33:17 [scrapy.core.scraper] DEBUG: Scraped from <200 https://vancouver.craigslist.ca/search/apa?s=0>
{'prc': '$2400', 'sqf': '\n 1br -\n 895ft', 'loc': ' (North Vancouver)'}
2018-06-07 17:33:17 [scrapy.core.engine] INFO: Closing spider (finished)
2018-06-07 17:33:17 [scrapy.extensions.feedexport] INFO: Stored json feed (1 items) in: rentals.json
2018-06-07 17:33:17 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 468,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 37724,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 6, 8, 0, 33, 17, 36724),
'item_scraped_count': 1,
'log_count/DEBUG': 4,
'log_count/INFO': 8,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2018, 6, 8, 0, 33, 16, 533959)}
2018-06-07 17:33:17 [scrapy.core.engine] INFO: Spider closed (finished)
Conclusion
I finally have some code that outputs what I expect. Unfortunately, back when I had it working with XPath, the script lumped all of the prices into one list, the square footages into another, and the locations into a third. I prefer XPath, and I am sure there is a way to keep XPath while still producing a separate dictionary per listing.
import scrapy


class RentalsCrawler(scrapy.Spider):
    name = "rentals"
    allowed_domains = [
        'craigslist.org',
        'kajiji.ca'
    ]
    custom_settings = {
        'DOWNLOAD_DELAY': 2,
        'CONCURRENT_REQUESTS_PER_DOMAIN': 2,
    }
    handle_httpstatus_list = [404]

    def start_requests(self):
        start = 0
        nopgs = 1
        pages = []
        for i in range(0, nopgs):
            i = i * 120 + start
            pages.append('https://vancouver.craigslist.ca/search/apa?s=' + str(i))
        for page in pages:
            yield scrapy.Request(url=page, callback=self.parse)

    def parse(self, response):
        for li in response.css('ul.rows li p span.result-meta'):
            prc = li.css('span.result-price::text').extract_first()
            sqf = li.css('span.housing::text').extract_first()
            # 'result-hood' alone is treated as an element name;
            # the class selector needs the 'span.' prefix.
            loc = li.css('span.result-hood::text').extract_first()
            objct = { 'prc': prc }
            if sqf:
                objct['sqf'] = sqf
            if loc:
                objct['loc'] = loc
            yield objct
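The "everything lumped into one list" symptom is exactly what absolute XPaths (starting with `//`) produce: they match every listing on the page at once. The fix that keeps XPath is the same row-first pattern as the CSS version above: select each `result-meta` node, then run *relative* XPaths (with a leading `./`) against it. The idea can be sketched without Scrapy using only the standard library; the markup below is an invented stand-in for Craigslist's result list, and Scrapy's `response.xpath` works the same way on the real page:

```python
# Per-listing extraction with relative paths, using only the stdlib.
# In Scrapy this corresponds to:
#   for meta in response.xpath('//span[@class="result-meta"]'):
#       meta.xpath('./span[@class="result-price"]/text()').extract_first()
import xml.etree.ElementTree as ET

html = """
<ul class="rows">
  <li><p><span class="result-meta">
    <span class="result-price">$2400</span>
    <span class="housing">1br - 895ft</span>
    <span class="result-hood">(North Vancouver)</span>
  </span></p></li>
  <li><p><span class="result-meta">
    <span class="result-price">$1800</span>
    <span class="result-hood">(Burnaby)</span>
  </span></p></li>
</ul>
"""

root = ET.fromstring(html)
listings = []
for meta in root.findall(".//span[@class='result-meta']"):
    # Relative lookups scoped to this row keep the fields of one
    # listing paired together instead of pooled across the page.
    objct = {'prc': meta.findtext("./span[@class='result-price']")}
    sqf = meta.findtext("./span[@class='housing']")
    loc = meta.findtext("./span[@class='result-hood']")
    if sqf:
        objct['sqf'] = sqf
    if loc:
        objct['loc'] = loc
    listings.append(objct)

print(listings)  # one dict per listing, optional keys omitted where absent
```

The key point is the leading `.` on each inner path: it anchors the query to the current row node rather than the document root.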
Best Answer
Is your code sample complete? If so, you may simply be missing a line at the end of parse that yields the item you want to add to the current scrapy job. I forget whether you have to yield an actual scrapy Item, but first try yield objct, i.e.:
def parse(self, response):
    ...
    objct['key'] = response.xpath("/my/clever/xpath")
    ...
    yield objct
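For what it's worth, Scrapy has accepted plain dicts yielded from parse since version 1.0, so a scrapy.Item is not strictly required. The yield mechanics themselves are ordinary Python generator behaviour, which can be seen without Scrapy at all; the `fake_rows` data below is invented purely for illustration:

```python
# parse() is a generator: each `yield` hands one item to the engine.
# A plain dict works fine. Simulated here with a stand-in for the
# pre-extracted result rows to show the flow.
def parse(rows):
    for row in rows:
        objct = {'prc': row.get('price')}
        if row.get('housing'):
            objct['sqf'] = row['housing']
        yield objct

fake_rows = [{'price': '$2400', 'housing': '895ft'}, {'price': '$1800'}]
items = list(parse(fake_rows))
print(items)  # one dict per input row
```

If nothing is yielded, the spider still "crawls" the page (hence the 200s in the log) but scrapes zero items, which matches the first, empty-feed run.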
About python - problems scraping craigslist.org, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50751281/