
python - Scrapy, crawling pages from a second set of links


I have been going through the Scrapy documentation today, trying to get a working version of https://docs.scrapy.org/en/latest/intro/tutorial.html#our-first-spider on a real-world example. My example is slightly different in that it has two next pages, i.e.

start_url > city page > unit page

It is the unit page I want to scrape data from.

My code:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://www.unitestudents.com/',
    ]

    def parse(self, response):
        for quote in response.css('div.property-body'):
            yield {
                'name': quote.xpath('//span/a/text()').extract(),
                'type': quote.xpath('//div/h4/text()').extract(),
                'price_amens': quote.xpath('//div/p/text()').extract(),
                'distance_beds': quote.xpath('//li/p/text()').extract()
            }

        # Purpose is to crawl links of cities
        next_page = response.css('a.listing-item__link::attr(href)').extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)

        # Purpose is to crawl links of units
        next_unit_page = response.css(response.css('a.text-highlight__inner::attr(href)').extract_first())
        if next_unit_page is not None:
            next_unit_page = response.urljoin(next_unit_page)
            yield scrapy.Request(next_unit_page, callback=self.parse)

But when I run it I get:

INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

So I think my code is not set up to follow the links in the flow above, but I am not sure how best to do that?

Updated flow:

Main page > City page > Building page > Unit page

It is still the unit page I want to get the data from.

Updated code:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://www.unitestudents.com/',
    ]

    def parse(self, response):
        for quote in response.css('div.site-wrapper'):
            yield {
                'area_name': quote.xpath('//div/ul/li/a/span/text()').extract(),
                'type': quote.xpath('//div/div/div/h1/span/text()').extract(),
                'period': quote.xpath('/html/body/div/div/section/div/form/h4/span/text()').extract(),
                'duration_weekly': quote.xpath('//html/body/div/div/section/div/form/div/div/em/text()').extract(),
                'guide_total': quote.xpath('//html/body/div/div/section/div/form/div/div/p/text()').extract(),
                'amenities': quote.xpath('//div/div/div/ul/li/p/text()').extract(),
            }

        # Purpose is to crawl links of cities
        next_page = response.xpath('//html/body/div/footer/div/div/div/ul/li/a[@class="listing-item__link"]/@href').extract()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)

        # Purpose is to crawl links of units
        next_unit_page = response.xpath('//li/div/h3/span/a/@href').extract()
        if next_unit_page is not None:
            next_unit_page = response.urljoin(next_unit_page)
            yield scrapy.Request(next_unit_page, callback=self.parse)

        # Purpose is to crawl pages of full unit info
        last_unit_page = response.xpath('//div/div/div[@class="content__btn"]/a/@href').extract()
        if last_unit_page is not None:
            last_unit_page = response.urljoin(last_unit_page)
            yield scrapy.Request(last_unit_page, callback=self.parse)

Best answer

Let's start with the logic:

  1. Crawl the main page - get all the cities
  2. Crawl the city pages - get all the unit URLs
  3. Crawl the unit pages - get all the data you need

I have made an example in the Scrapy spider below of how to implement this. I could not find all the information you mention in your example code, but I hope the code is clear enough for you to understand what it does and how to add the information you need.

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://www.unitestudents.com/',
    ]

    # Step 1
    def parse(self, response):
        # Select all cities listed in the dropdown (exclude the "Select your city" option)
        for city in response.xpath('//select[@id="frm_homeSelect_city"]/option[not(contains(text(),"Select your city"))]/text()').extract():
            yield scrapy.Request(response.urljoin("/" + city), callback=self.parse_citypage)

    # Step 2
    def parse_citypage(self, response):
        # Select the url of each property listed on the city page
        for url in response.xpath('//div[@class="property-header"]/h3/span/a/@href').extract():
            yield scrapy.Request(response.urljoin(url), callback=self.parse_unitpage)

        # I could not find any pagination. Otherwise it would go here.
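        # (Hedged sketch, not part of the original answer: if the city pages did
        # paginate, a "next" link could be followed back into this same callback.
        # The selector below is hypothetical and would need checking against the
        # real markup.)
        # next_page = response.xpath('//a[contains(@class, "next")]/@href').extract_first()
        # if next_page:
        #     yield scrapy.Request(response.urljoin(next_page), callback=self.parse_citypage)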

    # Step 3
    def parse_unitpage(self, response):
        unitTypes = response.xpath('//div[@class="room-type-block"]/h5/text()').extract() + response.xpath('//h4[@class="content__header"]/text()').extract()
        # There can be multiple unit types, so we yield an item for each unit type we can find.
        for unitType in unitTypes:
            yield {
                'name': response.xpath('//h1/span/text()').extract_first(),
                'type': unitType,
                # 'price': response.xpath('XPATH GOES HERE'),  # Could not find a price on the page
                # 'distance_beds': response.xpath('XPATH GOES HERE')  # Could not find such info
            }

I think the code is fairly clean and simple. The comments should clarify why I chose to use the for loops. If anything is unclear, let me know and I will try to explain it.
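For completeness, here is one way to run the spider as a plain script rather than inside a Scrapy project. This is a minimal sketch, not part of the original answer; it assumes a reasonably recent Scrapy version (one that supports the FEEDS setting) and uses a hypothetical output file name, units.json.

from scrapy.crawler import CrawlerProcess

# Assumes QuotesSpider from the answer above is defined in (or imported into) this file.
process = CrawlerProcess(settings={
    'FEEDS': {'units.json': {'format': 'json'}},  # hypothetical output file
})
process.crawl(QuotesSpider)
process.start()  # blocks until the crawl finishes

Inside a generated Scrapy project you would instead run scrapy crawl quotes -o units.json from the command line.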

Regarding python - Scrapy, crawling pages from a second set of links, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/43493659/
