
python - Unable to scrape different results from different searches; a single result is repeated instead

Reposted · Author: 行者123 · Updated: 2023-12-02 09:48:33

I've written a script to parse the link names that appear after filling in the two input boxes (first name, last name) on a web page, with values taken from a csv file. The csv file contains thousands of names, which I'm trying to use to scrape the link names.

The problem is that the spider always scrapes the link name for the last search (the final name in the list).

How can I scrape the individual link name associated with each search?

Here are a few first and last names, in order, for your consideration [taken from the csv file]:

ANTONIO AMADOR      ACOSTA 
JOHN ROBERT ADAIR
ROBERT CURTIS ADAMEK
CY RITCHIE ADAMS
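
The spider below reads such rows with csv.DictReader, which implies document.csv has FIRST NAME and LAST NAME header columns. A minimal sketch of that parsing step, using a hypothetical in-memory excerpt of the file:

```python
import csv
import io

# Hypothetical excerpt of document.csv; the real file has thousands of rows.
sample = io.StringIO(
    "FIRST NAME,LAST NAME\n"
    "ANTONIO AMADOR,ACOSTA\n"
    "JOHN ROBERT,ADAIR\n"
    "ROBERT CURTIS,ADAMEK\n"
    "CY RITCHIE,ADAMS\n"
)

# DictReader yields one dict per data row, keyed by the header line.
rows = [row for row in csv.DictReader(sample)]
print(rows[0]["FIRST NAME"], rows[0]["LAST NAME"])  # ANTONIO AMADOR ACOSTA
```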

This is what I tried:

import csv
import scrapy
from scrapy.crawler import CrawlerProcess

class AmsrvsSpider(scrapy.Spider):
    name = "amsrvsSpiderscript"
    lead_url = "https://amsrvs.registry.faa.gov/airmeninquiry/Main.aspx"

    def start_requests(self):
        with open("document.csv", "r") as f:
            reader = csv.DictReader(f)
            itemlist = [item for item in reader]

        for item in itemlist:
            yield scrapy.Request(
                self.lead_url,
                meta={"fname": item['FIRST NAME'], "lname": item['LAST NAME']},
                dont_filter=True,
                callback=self.parse,
            )

    def parse(self, response):
        fname = response.meta.get("fname")
        lname = response.meta.get("lname")
        # Collect all named form inputs (ASP.NET view state, event fields, etc.)
        payload = {
            item.css('::attr(name)').get(default=''): item.css('::attr(value)').get(default='')
            for item in response.css("input[name]")
        }
        payload['ctl00$content$ctl01$txtbxFirstName'] = fname
        payload['ctl00$content$ctl01$txtbxLastName'] = lname
        payload.pop('ctl00$content$ctl01$btnClear')
        yield scrapy.FormRequest(
            self.lead_url,
            formdata=payload,
            dont_filter=True,
            callback=self.parse_content,
        )

    def parse_content(self, response):
        name = response.css("a[id$='lnkbtnAirmenName']::text").get()
        print(name)


if __name__ == "__main__":
    c = CrawlerProcess({
        'USER_AGENT': 'Mozilla/5.0',
        'DOWNLOAD_TIMEOUT': 5,
        'LOG_LEVEL': 'ERROR',
    })
    c.crawl(AmsrvsSpider)
    c.start()

The results on the site look like this:

[screenshot of the search results page]

Current output:

CY RITCHIE  ADAMS 
CY RITCHIE ADAMS
CY RITCHIE ADAMS
CY RITCHIE ADAMS

Expected output:

ANTONIO AMADOR  ACOSTA 
JOHN ROBERT ADAIR
ROBERT CURTIS ADAMEK
CY RITCHIE ADAMS

Best answer

Here's how I got it working. It turns out cookies play a crucial role here, so they have to be handled correctly to get the desired output. I also set COOKIES_ENABLED = True in settings.py.

Working script:

import csv
import scrapy

def get_fields():
    with open("brian.csv", "r") as f:
        reader = csv.DictReader(f)
        itemlist = [item for item in reader]
    return itemlist

class AmsrvsSpider(scrapy.Spider):
    name = "amsrvs"
    lead_url = 'https://amsrvs.registry.faa.gov/airmeninquiry/Main.aspx'
    start_urls = ['https://amsrvs.registry.faa.gov/airmeninquiry/Main.aspx']

    def parse(self, response):
        # Collect the hidden ASP.NET form fields from the landing page
        payload = {
            item.css('::attr(name)').get(default=''): item.css('::attr(value)').get(default='')
            for item in response.css("input[name]")
        }
        payload.pop('ctl00$content$ctl01$btnClear')

        for i, item in enumerate(get_fields()):
            payload['ctl00$content$ctl01$txtbxFirstName'] = item['FIRST NAME']
            payload['ctl00$content$ctl01$txtbxLastName'] = item['LAST NAME']
            # A distinct 'cookiejar' value per search keeps each session's
            # cookies isolated, so every search yields its own result.
            yield scrapy.FormRequest(
                self.lead_url,
                formdata=payload,
                meta={'cookiejar': i},
                dont_filter=True,
                callback=self.parse_result,
            )

    def parse_result(self, response):
        item_content = response.css("[id$='lnkbtnAirmenName']::text").get()
        print(item_content)
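
The meta={'cookiejar': i} key works because Scrapy's CookiesMiddleware keeps one independent cookie jar per distinct key, so each search holds its own ASP.NET session. Conceptually it is just a mapping from jar key to a separate cookie store; a simplified stdlib sketch of that idea, not Scrapy's actual code:

```python
from collections import defaultdict
from http.cookiejar import CookieJar

# Simplified model of per-request cookie isolation: one jar per
# 'cookiejar' key. Requests tagged with different keys never see
# each other's session cookies, so results cannot bleed between searches.
jars = defaultdict(CookieJar)

jar_for_search_0 = jars[0]
jar_for_search_1 = jars[1]
print(jar_for_search_0 is jar_for_search_1)  # False
```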

Regarding "python - Unable to scrape different results from different searches; a single result is repeated instead", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59757043/
