
python - How to scrape newly found links with Scrapy

Reposted. Author: 太空宇宙. Updated: 2023-11-04 07:52:38

I only recently started using Scrapy, so I'm not very proficient with it yet; this is really a newbie question.

I'm scraping a random convention site for practice. I've scraped the exhibitor names and booth numbers, but I also want the links to the companies, which open in a new window. I've already located and stored the links from the anchor tags, but I don't know how to follow and scrape those new links. Any kind of help or guidance would be greatly appreciated.

import scrapy

class ConventionSpider(scrapy.Spider):
    name = 'convention'
    allowed_domains = ['events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']
    start_urls = ['https://events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']

    def parse(self, response):
        name = response.xpath('//*[@class="companyName"]')
        number = response.xpath('//*[@class="boothLabel"]')
        link = response.xpath('//*[@class="companyName"]')
        for row, row1, row2 in zip(name, number, link):
            company = row.xpath('.//*[@class="exhibitorName"]/text()').extract_first()
            booth_num = row1.xpath('.//*[@class="boothLabel aa-mapIt"]/text()').extract_first()
            url = row2.xpath('.//a/@href').extract_first()

            yield {'Company': company, 'Booth Number': booth_num}

Best Answer

See this for reference: https://github.com/NilanshBansal/Craigslist_Scrapy/blob/master/craigslist/spiders/jobs.py

import scrapy
from scrapy import Request

class ConventionSpider(scrapy.Spider):
    name = 'convention'
    # allowed_domains = ['events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']
    start_urls = ['https://events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']

    def parse(self, response):
        name = response.xpath('//*[@class="companyName"]')
        number = response.xpath('//*[@class="boothLabel"]')
        link = response.xpath('//*[@class="companyName"]')
        for row, row1, row2 in zip(name, number, link):
            company = row.xpath('.//*[@class="exhibitorName"]/text()').extract_first()
            booth_num = row1.xpath('.//*[@class="boothLabel aa-mapIt"]/text()').extract_first()
            url = row2.xpath('.//a/@href').extract_first()

            # response.urljoin resolves a relative href against the page URL;
            # absolute URLs pass through unchanged
            yield Request(response.urljoin(url), callback=self.parse_page,
                          meta={'Url': url, 'Company': company, 'Booth Number': booth_num})

    def parse_page(self, response):
        company = response.meta.get('Company')
        booth_num = response.meta.get('Booth Number')
        website = response.xpath('//a[@class="aa-BoothContactUrl"]/text()').extract_first()

        yield {'Company': company, 'Booth Number': booth_num, 'Website': website}
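If the `href` values extracted from the anchor tags are relative, they must be resolved against the page URL before being handed to `Request`, which only accepts absolute URLs. Scrapy's `response.urljoin` delegates to the standard library's `urllib.parse.urljoin`; a minimal sketch of its behavior (the `eBooth.aspx` path is a made-up example, not taken from the actual site):

```python
from urllib.parse import urljoin

base = 'https://events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003'

# A relative href replaces the last path segment of the base URL
print(urljoin(base, 'eBooth.aspx?Index=1'))
# -> https://events.jspargo.com/ASCB18/Public/eBooth.aspx?Index=1

# An absolute href passes through unchanged
print(urljoin(base, 'https://example.com/company'))
# -> https://example.com/company
```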

Edit: commenting out the allowed_domains line lets the crawler follow links on other domains as well.
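For context: Scrapy's offsite filtering drops any request whose host is not an allowed domain or a subdomain of one, so an `allowed_domains` entry containing a full URL with a path never matches anything and every follow-up request gets filtered. A rough stdlib approximation of that check (`is_allowed` is a hypothetical helper written for illustration, not Scrapy's actual code):

```python
from urllib.parse import urlparse

def is_allowed(url: str, allowed_domains: list) -> bool:
    """Approximate the offsite check: the request host must equal an
    allowed domain or end with '.' + an allowed domain (a subdomain)."""
    host = urlparse(url).hostname or ''
    return any(host == d or host.endswith('.' + d) for d in allowed_domains)

# A bare domain matches requests to the site
print(is_allowed('https://events.jspargo.com/ASCB18/x',
                 ['events.jspargo.com']))  # True

# A full URL used as a "domain" never matches, so requests are dropped
print(is_allowed('https://events.jspargo.com/ASCB18/x',
                 ['events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']))  # False
```

This is why the simplest fixes are either to comment the line out, as above, or to set `allowed_domains = ['events.jspargo.com']`.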

See https://stackoverflow.com/a/52792350 in reply to your code.

Regarding "python - How to scrape newly found links with Scrapy", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52788125/
