
python - Making Scrapy follow links and collect data

Repost · Author: 太空狗 · Updated: 2023-10-29 21:22:06

I am trying to write a program in Scrapy that opens links and collects data from this tag: <p class="attrgroup"></p>.

I have managed to get Scrapy to collect all the links from a given URL, but not to follow them. Any help is greatly appreciated.

Best Answer

You need to yield a Request instance for each link, assign a callback, and extract the text of the desired p element inside that callback:

# -*- coding: utf-8 -*-
import scrapy


# item class included here
class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["craigslist.org"]
    start_urls = [
        "http://chicago.craigslist.org/search/emd?"
    ]

    BASE_URL = 'http://chicago.craigslist.org/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        item = DmozItem()
        item["link"] = response.url
        item["attr"] = "".join(response.xpath("//p[@class='attrgroup']//text()").extract())
        return item
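One caveat with the spider above: concatenating BASE_URL with the extracted href by hand is fragile (for example, a root-relative href beginning with "/" produces a double slash when appended to a base ending in "/"). The standard library's urljoin resolves relative hrefs against the page URL correctly. A minimal sketch, using a hypothetical listing href for illustration:

```python
from urllib.parse import urljoin

# Hypothetical relative href, shaped like what the hdrlnk XPath returns.
base = "http://chicago.craigslist.org/search/emd"
href = "/chi/emd/1234567890.html"

absolute_url = urljoin(base, href)
print(absolute_url)  # http://chicago.craigslist.org/chi/emd/1234567890.html
```

Inside a Scrapy callback the same resolution is available as response.urljoin(href), and Scrapy 1.4+ also offers response.follow(href, callback=...), which accepts relative URLs directly.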

Regarding "python - Making Scrapy follow links and collect data", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30152261/
