
python - Scrapy: spider does not crawl


I'm obviously new to Python, Scrapy, and programming in general. I'm trying to scrape this site, but my code doesn't seem to work. All the examples and tutorials I've found deal with simple, straightforward sites, or maybe I just can't get my head around it. I need to scrape hundreds of results and I really don't want to do it by hand.

So in this example I'm just trying to get the href from the div to check whether it works. It doesn't.

import scrapy
import requests


class QuotesSpider(scrapy.Spider):
    name = "items"

    def start_requests(self):
        urls = [
            'https://www.bosch-professional.com/ar/es/dl/localizador-de-distribuidores/dealerslist/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        list_doc = open('list_doc.txt', 'w')
        for item in response.css('div.row.m-dealer_list__row'):
            yield {
                'text': item.css('a::attr(href)').extract(),
            }

When run from the terminal (ignoring robots.txt), it returns:

2019-01-30 23:57:13 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2019-01-30 23:57:13 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-01-30 23:57:13 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-01-30 23:57:13 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-01-30 23:57:13 [scrapy.core.engine] INFO: Spider opened
2019-01-30 23:57:13 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-30 23:57:13 [scrapy.extensions.telnet] DEBUG: Telnet console listening on #NUMBER
2019-01-30 23:57:16 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.bosch-professional.com/ar/es/dl/localizador-de-distribuidores/dealerslist/> (referer: None)
2019-01-30 23:57:16 [scrapy.core.engine] INFO: Closing spider (finished)
2019-01-30 23:57:16 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 276,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 70592,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2019, 1, 31, 2, 57, 16, 541215),
'log_count/DEBUG': 2,
'log_count/INFO': 7,
'memusage/max': 57974784,
'memusage/startup': 57974784,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2019, 1, 31, 2, 57, 13, 861593)}
2019-01-30 23:57:16 [scrapy.core.engine] INFO: Spider closed (finished)

Thanks for any help you can offer.

Best Answer

As far as I can see, there really is no such element on the page:

In [2]: fetch("https://www.bosch-professional.com/ar/es/dl/localizador-de-distribuidores/dealerslist/")
2019-01-31 09:31:47 [scrapy.core.engine] INFO: Spider opened
2019-01-31 09:31:48 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.bosch-professional.com/ar/es/dl/localizador-de-distribuidores/dealerslist/> (referer: None, latency: 0.87 s)

In [3]: response.css('div.row.m-dealer_list__row')
Out[3]: []
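
To see which dealer-related classes the page actually does use, you can inspect the class attributes in the same shell session. This is only a debugging sketch; the exact class names returned depend on the live page:

# List the class attribute of every div whose class mentions "dealer"
response.xpath('//div[contains(@class, "dealer")]/@class').extract()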

But if you're willing to try this instead:

In [4]: response.css('div.m-dealer_citylist__card a::text').extract()
Out[4]:
[u'25 DE MAYO - BS AS',
u'25 DE MAYO - LA PAMP',
u'25 DE MAYO',
u'9 DE ABRIL',
...
u'ZENON PEREYRA',
u'Z\xc1RATE']
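
Building on that selector, a revised spider might look like the sketch below. This is only an illustration: the m-dealer_citylist__card class comes from the inspection above and may change, and the hrefs appear to be relative, so response.urljoin() is used to make them absolute.

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "items"
    start_urls = [
        'https://www.bosch-professional.com/ar/es/dl/localizador-de-distribuidores/dealerslist/',
    ]

    def parse(self, response):
        # Each card in the city list links to a dealer page; yield the
        # city name together with an absolute URL built from its href.
        for link in response.css('div.m-dealer_citylist__card a'):
            href = link.css('::attr(href)').extract_first()
            if href:
                yield {
                    'city': link.css('::text').extract_first(),
                    'url': response.urljoin(href),
                }

Items yielded this way can then be collected with Scrapy's feed exports (for example, scrapy crawl items -o dealers.json) rather than opening a file by hand inside parse().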

Regarding "python - Scrapy: spider does not crawl", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/54452808/
