
python - Scrapy doesn't scrape if one element is missing

Reposted · Author: 太空宇宙 · Updated: 2023-11-04 04:07:53

I've spent several hours over the past two days building my first Scrapy spider, but now I'm stuck. My main goal is to extract all the data so I can filter it later in a CSV. Right now, the data that really matters to me (companies WITHOUT a web page) gets dropped, because when an item has no homepage, Scrapy can't find the XPath I provided. I tried an if statement here, but it doesn't work.

Example page with a website: https://www.achern.de/de/Wirtschaft/Unternehmen-A-Z/Unternehmen?view=publish&item=company&id=1345

The XPath selector I use: response.xpath("//div[@class='cCore_contactInformationBlockWithIcon cCore_wwwIcon']/a/@href").extract()

Example page without a website: https://www.achern.de/de/Wirtschaft/Unternehmen-A-Z/Unternehmen?view=publish&item=company&id=1512

Spider code:

# -*- coding: utf-8 -*-
import scrapy


class AchernSpider(scrapy.Spider):
    name = 'achern'
    allowed_domains = ['www.achern.de']
    start_urls = ['https://www.achern.de/de/Wirtschaft/Unternehmen-A-Z/']

    def parse(self, response):
        for href in response.xpath("//ul[@class='cCore_list cCore_customList']/li[*][*]/a/@href"):
            url = response.urljoin(href.extract())
            yield scrapy.Request(url, callback=self.scrape)

    def scrape(self, response):
        # Extracting the content using css selectors
        print("Processing:" + response.url)
        firma = response.css('div>#cMpu_publish_company>h2.cCore_headline::text').extract()
        anschrift = response.xpath("//div[contains(@class,'cCore_addressBlock_address')]/text()").extract()
        tel = response.xpath("//div[@class='cCore_contactInformationBlockWithIcon cCore_phoneIcon']/text()").extract()
        mail = response.xpath(".//div[@class='cCore_contactInformationBlock']//*[contains(text(), '@')]/text()").extract()
        web1 = response.xpath("//div[@class='cCore_contactInformationBlockWithIcon cCore_wwwIcon']/a/@href").extract()
        if "http:" not in web1:
            web = "na"
        else:
            web = web1

        row_data = zip(firma, anschrift, tel, mail, web1)  # web1 must be changed to web, but then it only gives out "n" for every link
        # Give the extracted content row wise
        for item in row_data:
            # create a dictionary to store the scraped info
            scraped_info = {
                'Firma': item[0],
                'Anschrift': item[1] + ' 77855 Achern',
                'Telefon': item[2],
                'Mail': item[3],
                'Web': item[4],
            }

            # yield or give the scraped info to scrapy
            yield scraped_info

So overall, it should also export the items that currently get DROPPED, even when "web" doesn't exist.
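A likely reason for the dropped items, sketched here without Scrapy: `.extract()` returns an empty list when the selector matches nothing, and `zip()` stops at its shortest input, so every row vanishes for pages without a website. The `if "http:" not in web1` check also tests list membership rather than substrings, so it never behaves as intended:

```python
# .extract() returns [] when nothing matches; zip() stops at the
# shortest input, so all rows are silently dropped.
firma = ["Firma A", "Firma B"]   # two companies found
web = []                         # no website matched -> empty list
print(list(zip(firma, web)))     # [] -- every row dropped

# "http:" not in web1 checks list MEMBERSHIP, not substrings:
web1 = ["http://example.com"]
print("http:" in web1)           # False -- "http:" is not a list element
```

This is why switching `web1` to `web` in the `zip()` call made things worse: `web` was the 2-character string `"na"`, which `zip()` iterated character by character, yielding "n" for the first row.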

Hope someone can help, regards S

Best Answer

Using

response.css(".cCore_wwwIcon > a::attr(href)").get()

gives you either None or the website address, so you can use `or` to supply a default value:

website = response.css(".cCore_wwwIcon > a::attr(href)").get() or 'na'

In addition, I refactored your spider to use CSS selectors. Note that I use .get() instead of .extract() to fetch a single item rather than a list, which simplifies the code considerably.
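The `or` default works because `.get()` returns None when the selector matches nothing, and `None or 'na'` evaluates to 'na'. A minimal sketch of the idiom, independent of Scrapy (the helper name is mine, for illustration):

```python
def with_default(value, default="na"):
    # Mimics `selector.get() or default`: a miss (None) falls
    # back to the default; a real value passes through unchanged.
    return value or default

print(with_default("http://www.woelfinger-fahrschule.de"))
print(with_default(None))  # na
```

Note that the fallback also fires on an empty string, since `"" or 'na'` is 'na' as well, which is usually what you want here.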

import scrapy
from scrapy.crawler import CrawlerProcess


class AchernSpider(scrapy.Spider):
    name = 'achern'
    allowed_domains = ['www.achern.de']
    start_urls = ['https://www.achern.de/de/Wirtschaft/Unternehmen-A-Z/']

    def parse(self, response):
        for url in response.css("[class*=cCore_listRow] > a::attr(href)").extract():
            yield scrapy.Request(url, callback=self.scrape)

    def scrape(self, response):
        # Extracting the content using css selectors
        firma = response.css('.cCore_headline::text').get()
        anschrift = response.css('.cCore_addressBlock_address::text').get()
        tel = response.css(".cCore_phoneIcon::text").get()
        mail = response.css("[href^=mailto]::attr(href)").get().replace('mailto:', '')
        website = response.css(".cCore_wwwIcon > a::attr(href)").get() or 'na'

        scraped_info = {
            'Firma': firma,
            'Anschrift': anschrift + ' 77855 Achern',
            'Telefon': tel,
            'Mail': mail,
            'Web': website,
        }
        yield scraped_info


if __name__ == "__main__":
    p = CrawlerProcess()
    p.crawl(AchernSpider)
    p.start()
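One caveat about the refactored spider (my own observation, not part of the original answer): `.get()` can also return None for the mailto link, in which case `.replace(...)` raises AttributeError. The same `or` fallback guards against this; a minimal sketch:

```python
# What response.css("[href^=mailto]::attr(href)").get() yields on a miss:
mail_raw = None

# None.replace(...) would raise AttributeError; coerce to "" first,
# then fall back to "na" if nothing is left.
mail = (mail_raw or "").replace("mailto:", "") or "na"
print(mail)  # na
```

The same guard would also protect `anschrift + ' 77855 Achern'` against a missing address block.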

Output:

with website:
{'Firma': 'Wölfinger Fahrschule GmbH', 'Anschrift': 'Güterhallenstraße 8 77855 Achern', 'Telefon': '07841 6738132', 'Mail': 'info@woelfinger-fahrschule.de', 'Web': 'http://www.woelfinger-fahrschule.de'}

without website:
{'Firma': 'Zappenduster-RC Steffen Liepe', 'Anschrift': 'Am Kirchweg 16 77855 Achern', 'Telefon': '07841 6844700', 'Mail': 'Zappenduster-Rc@hotmail.de', 'Web': 'na'}

Regarding "python - Scrapy doesn't scrape if one element is missing", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56913310/
