
python - Scrapy: how to store the url_id together with the scraped data


from scrapy import Spider, Request
from selenium import webdriver


class MySpider(Spider):
    name = "my_spider"

    def __init__(self):
        self.browser = webdriver.Chrome(executable_path='E:/chromedriver')
        self.browser.set_page_load_timeout(100)

    def closed(self, spider):
        print("spider closed")
        self.browser.close()

    def start_requests(self):
        start_urls = []
        with open("target_urls.txt", 'r', encoding='utf-8') as f:
            for line in f:
                url_id, url = line.split('\t\t')
                start_urls.append(url)

        for url in start_urls:
            yield Request(url=url, callback=self.parse)

    def parse(self, response):
        yield {
            'target_url': response.url,
            'comments': response.xpath('//div[@class="comments"]//em//text()').extract()
        }

Above is my Scrapy code. I run the spider with scrapy crawl my_spider -o comments.json.

As you can see, each of my URLs has a unique url_id associated with it. How can I match each scraped result with its url_id? Ideally, I would like the url_id to be stored alongside the yielded output in comments.json.
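For illustration, assuming each line of target_urls.txt holds a url_id and a URL separated by two tabs (the values below are made up), the record I would like to see in comments.json looks roughly like this:

# Hypothetical line in target_urls.txt (url_id and URL separated by '\t\t'):
# 1001		https://example.com/page-1

# Desired entry in comments.json, with the url_id carried through:
# {"url_id": "1001", "target_url": "https://example.com/page-1", "comments": ["..."]}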

Thanks a lot!

Best Answer

Try passing the meta argument, for example. I made a few updates to your code:

def start_requests(self):
    with open("target_urls.txt", 'r', encoding='utf-8') as f:
        for line in f:
            url_id, url = line.split('\t\t')
            yield Request(url, self.parse, meta={'url_id': url_id, 'original_url': url})

def parse(self, response):
    yield {
        'target_url': response.meta['original_url'],
        'url_id': response.meta['url_id'],
        'comments': response.xpath('//div[@class="comments"]//em//text()').extract()
    }

Regarding python - Scrapy: how to store the url_id together with the scraped data, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55373779/
