
python - Scrapy - How to crawl new pages based on links in scraped items


I'm new to Scrapy, and I'm trying to crawl new pages from the links in scraped items. Specifically, I want to scrape some Dropbox file-sharing links from Google search results and store those links in a JSON file. After collecting the links, I want to open a new page for each one to check whether the link is still valid. If it is, I also want to store the file name in the JSON file.

I use a DropboxItem with the fields "link", "filename", "status", and "err_msg" to store each scraped item, and in the parse function I try to initiate an asynchronous request for each scraped link. But the parse_file_page function never seems to be called. Does anyone know how to implement a two-step crawl like this?

    # Imports assumed for Scrapy 0.22 (see the log below); DropboxItem is the
    # project's Item subclass with 'link', 'filename', 'status', 'err_msg' fields.
    from scrapy.spider import Spider
    from scrapy.selector import Selector
    from scrapy.http import Request

    class DropboxSpider(Spider):
        name = "dropbox"
        allowed_domains = ["google.com"]
        start_urls = [
            "https://www.google.com/#filter=0&q=site:www.dropbox.com/s/&start=0"
        ]

        def parse(self, response):
            sel = Selector(response)
            sites = sel.xpath("//h3[@class='r']")
            items = []
            for site in sites:
                item = DropboxItem()
                link = site.xpath('a/@href').extract()
                item['link'] = link
                link = ''.join(link)
                #I want to parse a new page with url=link here
                new_request = Request(link, callback=self.parse_file_page)
                new_request.meta['item'] = item
                items.append(item)
            return items

        def parse_file_page(self, response):
            #item passed from request
            item = response.meta['item']
            #selector
            sel = Selector(response)
            content_area = sel.xpath("//div[@id='shmodel-content-area']")
            filename_area = content_area.xpath("div[@class='filename shmodel-filename']")
            if filename_area:
                filename = filename_area.xpath("span[@id]/text()").extract()
                if filename:
                    item['filename'] = filename
                    item['status'] = "normal"
            else:
                err_area = content_area.xpath("div[@class='err']")
                if err_area:
                    err_msg = err_area.xpath("h3/text()").extract()
                    item['err_msg'] = err_msg
                    item['status'] = "error"
            return item

Thanks to @ScrapyNovice for the answer. I've modified the code. It now looks like this:

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath("//h3[@class='r']")
        #items = []
        for site in sites:
            item = DropboxItem()
            link = site.xpath('a/@href').extract()
            item['link'] = link
            link = ''.join(link)
            print 'link!!!!!!=', link
            new_request = Request(link, callback=self.parse_file_page)
            new_request.meta['item'] = item
            yield new_request
            #items.append(item)
        yield item
        return
        #return item #Note, when I simply return item here, got an error msg "SyntaxError: 'return' with argument inside generator"

    def parse_file_page(self, response):
        #item passed from request
        print 'parse_file_page!!!'
        item = response.meta['item']
        #selector
        sel = Selector(response)
        content_area = sel.xpath("//div[@id='shmodel-content-area']")
        filename_area = content_area.xpath("div[@class='filename shmodel-filename']")
        if filename_area:
            filename = filename_area.xpath("span[@id]/text()").extract()
            if filename:
                item['filename'] = filename
                item['status'] = "normal"
                item['err_msg'] = "none"
                print 'filename=', filename
        else:
            err_area = content_area.xpath("div[@class='err']")
            if err_area:
                err_msg = err_area.xpath("h3/text()").extract()
                item['filename'] = "null"
                item['err_msg'] = err_msg
                item['status'] = "error"
                print 'err_msg', err_msg
            else:
                item['filename'] = "null"
                item['err_msg'] = "unknown_err"
                item['status'] = "error"
                print 'unknown err'
        return item

The control flow actually gets quite strange. When I use "scrapy crawl dropbox -o items_dropbox.json -t json" to crawl a local file (a downloaded page of Google search results), I see output like:
2014-05-31 08:40:35-0400 [scrapy] INFO: Scrapy 0.22.2 started (bot: tutorial)
2014-05-31 08:40:35-0400 [scrapy] INFO: Optional features available: ssl, http11
2014-05-31 08:40:35-0400 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'FEED_FORMAT': 'json', 'SPIDER_MODULES': ['tutorial.spiders'], 'FEED_URI': 'items_dropbox.json', 'BOT_NAME': 'tutorial'}
2014-05-31 08:40:35-0400 [scrapy] INFO: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-05-31 08:40:35-0400 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-05-31 08:40:35-0400 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-05-31 08:40:35-0400 [scrapy] INFO: Enabled item pipelines:
2014-05-31 08:40:35-0400 [dropbox] INFO: Spider opened
2014-05-31 08:40:35-0400 [dropbox] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-05-31 08:40:35-0400 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-05-31 08:40:35-0400 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-05-31 08:40:35-0400 [dropbox] DEBUG: Crawled (200) <GET file:///home/xin/Downloads/dropbox_s/dropbox_s_1-Google.html> (referer: None)
link!!!!!!= http://www.google.com/intl/en/webmasters/#utm_source=en-wmxmsg&utm_medium=wmxmsg&utm_campaign=bm&authuser=0
link!!!!!!= https://www.dropbox.com/s/
2014-05-31 08:40:35-0400 [dropbox] DEBUG: Filtered offsite request to 'www.dropbox.com': <GET https://www.dropbox.com/s/>
link!!!!!!= https://www.dropbox.com/s/awg9oeyychug66w
link!!!!!!= http://www.dropbox.com/s/kfmoyq9y4vrz8fm
link!!!!!!= https://www.dropbox.com/s/pvsp4uz6gejjhel
.... many links here
link!!!!!!= https://www.dropbox.com/s/gavgg48733m3918/MailCheck.xlsx
link!!!!!!= http://www.dropbox.com/s/9x8924gtb52ksn6/Phonesky.apk
2014-05-31 08:40:35-0400 [dropbox] DEBUG: Scraped from <200 file:///home/xin/Downloads/dropbox_s/dropbox_s_1-Google.html>
{'link': [u'http://www.dropbox.com/s/9x8924gtb52ksn6/Phonesky.apk']}
2014-05-31 08:40:35-0400 [dropbox] DEBUG: Crawled (200) <GET http://www.google.com/intl/en/webmasters/#utm_source=en-wmxmsg&utm_medium=wmxmsg&utm_campaign=bm&authuser=0> (referer: file:///home/xin/Downloads/dropbox_s/dropbox_s_1-Google.html)
parse_file_page!!!
unknown err
2014-05-31 08:40:35-0400 [dropbox] DEBUG: Scraped from <200 http://www.google.com/intl/en/webmasters/>
{'err_msg': 'unknown_err',
'filename': 'null',
'link': [u'http://www.google.com/intl/en/webmasters/#utm_source=en-wmxmsg&utm_medium=wmxmsg&utm_campaign=bm&authuser=0'],
'status': 'error'}
2014-05-31 08:40:35-0400 [dropbox] INFO: Closing spider (finished)
2014-05-31 08:40:35-0400 [dropbox] INFO: Stored json feed (2 items) in: items_dropbox.json
2014-05-31 08:40:35-0400 [dropbox] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 558,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 449979,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 5, 31, 12, 40, 35, 348058),
'item_scraped_count': 2,
'log_count/DEBUG': 7,
'log_count/INFO': 8,
'request_depth_max': 1,
'response_received_count': 2,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2014, 5, 31, 12, 40, 35, 249309)}
2014-05-31 08:40:35-0400 [dropbox] INFO: Spider closed (finished)

Now the JSON file contains only:

[{"link": ["http://www.dropbox.com/s/9x8924gtb52ksn6/Phonesky.apk"]},
{"status": "error", "err_msg": "unknown_err", "link": ["http://www.google.com/intl/en/webmasters/#utm_source=en-wmxmsg&utm_medium=wmxmsg&utm_campaign=bm&authuser=0"], "filename": "null"}]

Best Answer

You're creating a Request and setting its callback nicely, but then you never do anything with it.

        for site in sites:
            item = DropboxItem()
            link = site.xpath('a/@href').extract()
            item['link'] = link
            link = ''.join(link)
            #I want to parse a new page with url=link here
            new_request = Request(link, callback=self.parse_file_page)
            new_request.meta['item'] = item
            yield new_request
            # Don't do this here because you're adding your Item twice.
            #items.append(item)

On a more design-related level: you're storing all of your scraped items in items at the end of parse(), but pipelines generally expect to receive individual items, not arrays of them. Get rid of the items array and you'll be able to use the JSON Feed Export built into Scrapy to store the results in JSON format, as in the sketch below.
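
Putting both points together, a minimal sketch of a reworked parse() (my sketch, assuming the imports and DropboxItem from the question; the item is completed and yielded from parse_file_page() instead of here) could look like:

    def parse(self, response):
        sel = Selector(response)
        for site in sel.xpath("//h3[@class='r']"):
            item = DropboxItem()
            link = ''.join(site.xpath('a/@href').extract())
            item['link'] = link
            # Hand the half-filled item to the follow-up request;
            # parse_file_page() fills in the remaining fields and yields it.
            new_request = Request(link, callback=self.parse_file_page)
            new_request.meta['item'] = item
            yield new_request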

Update:

The reason you got an error message when you tried to return is that using yield in a function turns it into a generator. That lets you call the function repeatedly: each time it reaches a yield, it returns the value you yielded, but remembers its state and the line it was executing. The next time the generator is called, it resumes execution from where it left off. If it has nothing left to yield, it raises a StopIteration exception. In Python 2, mixing yield and return with a value in the same function is not allowed.
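
As a quick toy illustration of that behavior (my example, not part of the original answer):

    def count_to(n):
        i = 1
        while i <= n:
            yield i  # pause here; resume from this line on the next call
            i += 1

    gen = count_to(2)
    print next(gen)  # 1
    print next(gen)  # 2
    print next(gen)  # nothing left to yield: raises StopIteration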

You don't want to yield any items from parse(), because they're still missing their filename, status, and so on.

The requests you make in parse() point at dropbox.com, correct? Those requests aren't going through, because dropbox is not in the spider's allowed_domains. (Hence the log message: DEBUG: Filtered offsite request to 'www.dropbox.com': <GET https://www.dropbox.com/s/>)
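
The direct fix (my inference from that log line, not spelled out in the original answer) is to whitelist both domains in the spider:

    allowed_domains = ["google.com", "dropbox.com"]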

The one request that actually works and isn't filtered goes to http://www.google.com/intl/en/webmasters/#utm_source=en-wmxmsg&utm_medium=wmxmsg&utm_campaign=bm&authuser=0, which is one of Google's pages, not DropBox's. You may want to use urlparse to check a link's domain before making the request in your parse() method.
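
A small self-contained helper along those lines (a sketch; the function name is mine):

    from urlparse import urlparse  # Python 2 module; urllib.parse in Python 3

    def is_dropbox_link(url):
        # True only when the link's host is dropbox.com (with or without www.)
        host = urlparse(url).netloc
        return host in ('dropbox.com', 'www.dropbox.com')

    print is_dropbox_link('https://www.dropbox.com/s/awg9oeyychug66w')   # True
    print is_dropbox_link('http://www.google.com/intl/en/webmasters/')   # False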

As for your results: the first JSON object

{"link": ["http://www.dropbox.com/s/9x8924gtb52ksn6/Phonesky.apk"]}

comes from where you call yield item in your parse() method. There's only one of them because the yield isn't inside any kind of loop, so when the generator resumes execution it runs the next line: return, which exits the generator. You'll notice this item is missing all of the fields that you fill in in your parse_file_page() method; that's why you don't want to yield any items in your parse() method.

Your second JSON object

    {
        "status": "error",
        "err_msg": "unknown_err",
        "link": ["http://www.google.com/intl/en/webmasters/#utm_source=en-wmxmsg&utm_medium=wmxmsg&utm_campaign=bm&authuser=0"],
        "filename": "null"
    }

is the result of trying to parse one of Google's pages as if it were the DropBox page you were expecting. You yield multiple requests in your parse() method, and all but one of them point at dropbox.com. All of the DropBox links get dropped because they aren't in your allowed_domains, so the only response you get is for the one other link on the page that matches your xpath selector and comes from a site in your allowed domains: the Google Webmasters link. That's why you only see parse_file_page!!! once in your output.

I'd recommend learning more about generators, as they're a fundamental part of using Scrapy. The second Google result for "python generator tutorial" looks like a very good place to start.

Regarding python - Scrapy - how to crawl new pages based on links in scraped items, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/23881872/
