
python - Scraping pagination with Python


I am trying to scrape some airline data from the following site: http://www.airlinequality.com/airline-reviews/airasia-x.

I managed to extract the data I need, but I am struggling with the site's pagination. I am trying to get all of the review titles, not just the ones on the first page.

The page links have the form http://www.airlinequality.com/airline-reviews/airasia-x/page/3/, where 3 is the page number.
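For reference, a minimal sketch of how those page URLs can be enumerated directly (N_PAGES here is a hypothetical placeholder; the real total would have to be read from the site, e.g. from its pagination-total text):

# Minimal sketch: build the paginated URLs directly.
# N_PAGES is a hypothetical placeholder for the real page count.
BASE = 'http://www.airlinequality.com/airline-reviews/airasia-x'
N_PAGES = 3

page_urls = [BASE + '/'] + [f'{BASE}/page/{n}/' for n in range(2, N_PAGES + 1)]
# -> ['.../airasia-x/', '.../airasia-x/page/2/', '.../airasia-x/page/3/']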

I tried iterating over those URLs with the snippet below, but following the pagination did not work.

# follow pagination links
for href in response.css('#main > section.layout-section.layout-2.closer-top > div.col-content > div > article > ul li a'):
    yield response.follow(href, self.parse)

How can I solve this? Here is the full spider:

import scrapy
import re  # for text parsing
import logging
from scrapy.crawler import CrawlerProcess


class AirlineSpider(scrapy.Spider):
    name = 'airlineSpider'
    # page to scrape
    start_urls = ['http://www.airlinequality.com/review-pages/a-z-airline-reviews/']

    def parse(self, response):
        # take each element in the list of airlines
        for airline in response.css("div.content ul.items li"):
            # go inside the URL for each airline
            airline_url = airline.css('a::attr(href)').extract_first()

            # call parse_article on the airline's review page
            next_page = airline_url
            if next_page is not None:
                yield response.follow(next_page, callback=self.parse_article)

        # follow pagination links
        for href in response.css('#main > section.layout-section.layout-2.closer-top > div.col-content > div > article > ul li a'):
            yield response.follow(href, self.parse)

    # go to the pages inside the links (for each airline) - the pages where the reviews are
    def parse_article(self, response):
        yield {
            'appears_url': response.url,
            # use sub to collapse \n, \t, and \r in the result
            'title': re.sub(r'\s+', ' ', response.css('div.info [itemprop="name"]::text').extract_first().strip()),
            'reviewTitle': response.css('div.body .text_header::text').extract(),
            #'total': response.css('#main > section.layout-section.layout-2.closer-top > div.col-content > div > article > div.pagination-total::text').extract_first().split(" ")[4],
        }


process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
    'FEED_FORMAT': 'json',
    'FEED_URI': 'air_test.json'
})

# minimize the information printed to the scrapy log
logging.getLogger('scrapy').setLevel(logging.WARNING)
process.crawl(AirlineSpider)
process.start()

To iterate over the airlines, I solved that part with the following code, which is used together with the code above:

from urllib.request import Request, urlopen
from bs4 import BeautifulSoup
import re

req = Request("http://www.airlinequality.com/review-pages/a-z-airline-reviews/", headers={'User-Agent': 'Mozilla/5.0'})
html_page = urlopen(req)
soupAirlines = BeautifulSoup(html_page, "lxml")

URL_LIST = []
for link in soupAirlines.findAll('a', attrs={'href': re.compile("^/airline-reviews/")}):
    URL_LIST.append("http://www.airlinequality.com" + link.get('href'))
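If one wanted to feed URL_LIST back into Scrapy, a minimal sketch (the spider name and extraction here are assumptions mirroring the spider above) would override start_requests:

import scrapy

class AirlineListSpider(scrapy.Spider):
    # hypothetical spider that starts from the URL_LIST built above
    name = 'airlineListSpider'

    def start_requests(self):
        # issue one request per collected airline review page
        for url in URL_LIST:
            yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        # same extraction idea as parse_article in the spider above
        yield {
            'appears_url': response.url,
            'reviewTitle': response.css('div.body .text_header::text').extract(),
        }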

Best Answer

Assuming scrapy is not a hard requirement, the following BeautifulSoup code will get you all the reviews, parse out the metadata, and output a pandas DataFrame. The specific attributes extracted from each review are:

  • Review title
  • Rating (when available)
  • Rating scale (i.e., out of 10)
  • Full review text
  • Review datestamp
  • Whether the review is verified

There is a dedicated function to handle pagination. It is recursive: if a next page exists, the function is called again to parse the new URL; otherwise the chain of calls ends.

from bs4 import BeautifulSoup
import requests
import pandas as pd
import re

# define global parameters
URL = 'http://www.airlinequality.com/airline-reviews/airasia-x'
BASE_URL = 'http://www.airlinequality.com'
MASTER_LIST = []


def parse_review(review):
    """
    Parse important review metadata such as ratings, time of review, title,
    etc.

    Parameters
    ----------
    review - BeautifulSoup tag

    Return
    ------
    outdf - pd.DataFrame
        DataFrame representation of the parsed review
    """
    # get review header
    header = review.find('h2').text

    # get the numerical rating
    base_review = review.find('div', {'itemprop': 'reviewRating'})
    if base_review is None:
        rating = None
        rating_out_of = None
    else:
        rating = base_review.find('span', {'itemprop': 'ratingValue'}).text
        rating_out_of = base_review.find('span', {'itemprop': 'bestRating'}).text

    # get time of review
    time_of_review = review.find('h3').find('time')['datetime']

    # get whether review is verified
    if review.find('em'):
        verified = review.find('em').text
    else:
        verified = None

    # get actual text of review
    review_text = review.find('div', {'class': 'text_content'}).text

    outdf = pd.DataFrame({'header': header,
                          'rating': rating,
                          'rating_out_of': rating_out_of,
                          'time_of_review': time_of_review,
                          'verified': verified,
                          'review_text': review_text}, index=[0])

    return outdf


def return_next_page(soup):
    """
    Return next_url if pagination continues, else return None

    Parameters
    ----------
    soup - BeautifulSoup object - required

    Return
    ------
    next_url - str or None if no next page
    """
    next_url = None
    cur_page = soup.find('a', {'class': 'active'}, href=re.compile('airline-reviews/airasia'))
    # check if a next page exists: the element after the active page
    # carries no class when it is a regular page link
    search_next = cur_page.findNext('li').get('class')
    if not search_next:
        next_page_href = cur_page.findNext('li').find('a')['href']
        next_url = BASE_URL + next_page_href
    return next_url


def create_soup_reviews(url):
    """
    Iterate over each review, extract the content, and handle next-page
    logic through recursion

    Parameters
    ----------
    url - str - required
        input url
    """
    # use global MASTER_LIST to extend the list of all reviews
    global MASTER_LIST
    soup = BeautifulSoup(requests.get(url).content, 'html.parser')
    reviews = soup.findAll('article', {'itemprop': 'review'})
    review_list = [parse_review(review) for review in reviews]
    MASTER_LIST.extend(review_list)
    next_url = return_next_page(soup)
    if next_url is not None:
        create_soup_reviews(next_url)


create_soup_reviews(URL)


finaldf = pd.concat(MASTER_LIST)
finaldf.shape  # (339, 6)

finaldf.head(2)
# header                                   rating  rating_out_of  review_text                                        time_of_review  verified
# "if approved I will get my money back"   1       10             ✅ Trip Verified | Kuala Lumpur to Melbourne. ...  2018-08-07      Trip Verified
# "a few minutes error"                    3       10             ✅ Trip Verified | I've flied with AirAsia man...  2018-08-06      Trip Verified

If I were doing the entire site, I would use the code above and iterate over each airline here. I would modify the code to include a column named "airline" so you know which airline each review corresponds to.
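A minimal sketch of that modification, assuming the airline label is taken from the URL slug (note that return_next_page's hard-coded href regex would also need to be generalized per airline):

# Hypothetical extension: crawl several airlines and tag each review
# with the airline slug taken from its review URL.
AIRLINE_URLS = [
    'http://www.airlinequality.com/airline-reviews/airasia-x',
    'http://www.airlinequality.com/airline-reviews/airasia',  # example entries
]

frames = []
for airline_url in AIRLINE_URLS:
    MASTER_LIST.clear()  # reset the global accumulator for each airline
    create_soup_reviews(airline_url)
    df = pd.concat(MASTER_LIST)
    df['airline'] = airline_url.rstrip('/').split('/')[-1]  # e.g. 'airasia-x'
    frames.append(df)

all_airlines_df = pd.concat(frames, ignore_index=True)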

On the topic of python - Scraping pagination with Python, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/51922830/
