
python - Crawler updating data into an array, yield inside a loop


I want to scrape and update array values continuously in a loop, because I need to click a button to get the next value for the array. However, the yield inside the loop seems to run like parallel threads, and the item is yielded multiple times. What I want is to go through the loop, update the data, and yield the item only once. Example:

Current output:

{'field1': 'data1',
 'field2': 'data2',
 'field3': ['data31']}

{'field1': 'data1',
 'field2': 'data2',
 'field3': ['data32']}

{'field1': 'data1',
 'field2': 'data2',
 'field3': ['data33']}

Expected:

{'field1': 'data1',
 'field2': 'data2',
 'field3': ['data31', 'data32', 'data33']}

Here is my code:

def parse_individual_listings(self, response):
    ...
    data = {}
    data['field1'] = 'data1'
    data['field2'] = 'data2'
    ...
    for i in range(3):
        yield scrapy.Request(
            urlparse.urljoin(response.url, link['href']),  # different link
            callback=self.parse_individual_tabs,
            meta={'data': data, 'i': i},
        )

def parse_individual_tabs(self, response):
    data = response.meta['data']
    i = response.meta['i']
    ...
    # keep populating `data`
    data['field3'][i] = "data3[i]"  # this value changes when I click a button to update
    yield data

Best Answer

Try the inline_requests library (https://pypi.org/project/scrapy-inline-requests/). It lets you make follow-up requests inside the same callback function, which is useful for collecting data into a single object instead of yielding separate partial items. In your original code, each of the three requests triggers its own callback invocation, and each invocation yields the shared dict, which is why the item comes out three times. Check this example with some pseudo-code:

import scrapy
from scrapy import Selector
from inline_requests import inline_requests

@inline_requests
def parse_individual_listings(self, response):
    ...
    data = {}
    data['field1'] = 'data1'
    data['field2'] = 'data2'
    data['field3'] = []
    ...
    for i in range(3):
        # yielding the request suspends this callback until the
        # response arrives, so the loop runs sequentially
        extra_req = yield scrapy.Request(
            response.urljoin(link['href']),  # different link
        )
        # apply your logic here, e.g. extract some data
        sel = Selector(text=extra_req.text)
        data['field3'].append(sel.css('some css selector').get())
    # yield the fully populated item once, after the loop finishes
    yield data
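
If you prefer to stay with stock Scrapy and avoid the extra dependency, you can get the same "yield once" behaviour by chaining the requests instead of issuing them all in a loop: each callback requests the next link and carries the accumulated dict along in meta, and only the last callback yields the item. A minimal sketch, assuming a hypothetical links list holding the tab URLs and a placeholder CSS selector for the extraction step (both spider methods, as in your code):

import scrapy

def parse_individual_listings(self, response):
    data = {'field1': 'data1', 'field2': 'data2', 'field3': []}
    links = [...]  # hypothetical: the tab URLs collected from this page
    # start the chain with the first link, pass the rest along
    yield scrapy.Request(
        response.urljoin(links[0]),
        callback=self.parse_individual_tabs,
        meta={'data': data, 'links': links[1:]},
    )

def parse_individual_tabs(self, response):
    data = response.meta['data']
    links = response.meta['links']
    # placeholder: extract this tab's value and append it
    data['field3'].append(response.css('some css selector').get())
    if links:
        # more tabs left: fetch the next one, carrying `data` along
        yield scrapy.Request(
            response.urljoin(links[0]),
            callback=self.parse_individual_tabs,
            meta={'data': data, 'links': links[1:]},
        )
    else:
        # last tab reached: emit the fully populated item exactly once
        yield data

Either way, the key point is the same: the item must be yielded exactly once, after the last response has been processed.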

Regarding python - Crawler updating data into an array, yield inside a loop, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55758718/
