
python - Printing the result of "find_all" (using the bs4 library) returns an empty list (but the class I'm referencing exists)


I'm trying to build a web scraper for https://www.cappex.com/scholarships. I'm trying to find every div with the class "ais-hits--item", which contains scholarship information. When I use find_all (from bs4), the divs I'm looking for aren't returned, and I'm confused as to why. I'm fairly new to Python and not very familiar with HTML. There are many divs nested inside one another, so I tried searching for other divs with different classes, and they all return an empty list ([]). Am I doing something wrong?

import requests
from bs4 import BeautifulSoup


url = 'https://www.cappex.com/scholarships'
response = requests.get(url)

soup = BeautifulSoup(response.content, 'html.parser')
scholarships = soup.find_all('div', class_='ais-hits--item')

print(scholarships)

I expected a list of divs, but the output is [].

Best Answer

It turns out the <div> tags are not delivered with the page source, so they cannot be captured by BeautifulSoup. In other words, the site renders that content with JavaScript after the page loads, so bs4 alone will not get you the data. You can verify this by searching the raw page source for the string ais-hits--item.
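To illustrate the failure mode, here is a minimal sketch using a hypothetical HTML snippet standing in for what the server actually sends: since the scholarship divs are injected later by JavaScript, they simply aren't in the markup that BeautifulSoup parses, so find_all matches nothing.

```python
from bs4 import BeautifulSoup

# Hypothetical stand-in for the raw HTML the server sends: the scholarship
# divs are injected later by JavaScript, so they are absent here.
raw_html = "<html><body><div id='app'></div></body></html>"

soup = BeautifulSoup(raw_html, 'html.parser')

# The class never occurs in the server-sent markup...
print('ais-hits--item' in raw_html)                   # False
# ...so find_all over that markup returns an empty list.
print(soup.find_all('div', class_='ais-hits--item'))  # []
```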

That said, for this particular site you can actually query the data directly. Keep in mind, when you choose to do this, whether the site intends for you to have that access.

import requests

headers = {
    'accept': 'application/json',
    'content-type': 'application/x-www-form-urlencoded',
    'Origin': 'https://www.cappex.com',
    'Referer': 'https://www.cappex.com/scholarships',
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'
}

json = {"requests":[{"indexName":"prod-scholarship","params":"query=&hitsPerPage=12&maxValuesPerFacet=10&page=0&attributesToRetrieve=%5B%22name%22%2C%22administeringAgency%22%2C%22deadline%22%2C%22deadlineFormatted%22%2C%22awardAmount%22%2C%22maxAward%22%2C%22averageAwardAmount%22%2C%22variableAwardAmount%22%2C%22renewable%22%2C%22objectID%22%5D&restrictHighlightAndSnippetArrays=true&facets=%5B%22deadline%22%2C%22awardAmount%22%2C%22renewable%22%2C%22firstGeneration%22%2C%22financialNeedRequired%22%2C%22lgbtqia%22%2C%22disability%22%2C%22nonUSCitizenEligible%22%2C%22genders%22%2C%22ethnicities%22%2C%22enrollmentLevels%22%5D&tagFilters="}]}
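As an aside, the params field in that payload is URL-encoded, which makes it hard to read. Decoding a shortened fragment of it with the standard library shows what is actually being requested (the fragment below is an abbreviated, illustrative piece of the full string):

```python
from urllib.parse import parse_qs

# Abbreviated fragment of the encoded Algolia params string above.
fragment = "query=&hitsPerPage=12&page=0&attributesToRetrieve=%5B%22name%22%2C%22awardAmount%22%5D"

# keep_blank_values=True preserves the empty query= field.
parsed = parse_qs(fragment, keep_blank_values=True)

print(parsed['hitsPerPage'])              # ['12']
print(parsed['attributesToRetrieve'][0])  # ["name","awardAmount"]
```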

params = {
    'x-algolia-agent': 'Algolia for vanilla JavaScript 3.27.1;instantsearch.js 2.8.0;JS Helper 2.26.0',
    'x-algolia-application-id': 'MVAUKZTA2I',
    'x-algolia-api-key': 'd9568940e07ac01d868893e44be784e8'
}

url = 'https://mvaukzta2i-dsn.algolia.net/1/indexes/*/queries'
r = requests.post(url, headers=headers, params=params, json=json)

This fetches all of the site's data. For example:

results = r.json()['results']

results[0]['hits'][0]
Out[1]:
{'administeringAgency': 'My Best Mattress',
 'renewable': False,
 'name': 'MyBestMattress Scholarship',
 'deadlineFormatted': 'July 31, 2020',
 'awardAmount': 700.0,
 'averageAwardAmount': 700.0,
 'deadline': 1596153600000.0,
 'variableAwardAmount': False,
 'objectID': '52049',
 '_highlightResult': {'administeringAgency': {'value': 'My Best Mattress',
   'matchLevel': 'none',
   'matchedWords': []},
  'name': {'value': 'MyBestMattress Scholarship',
   'matchLevel': 'none',
   'matchedWords': []},
  'deadlineFormatted': {'value': 'July 31, 2020',
   'matchLevel': 'none',
   'matchedWords': []},
  'awardAmount': {'value': '700.0', 'matchLevel': 'none', 'matchedWords': []},
  'id': {'value': '52049', 'matchLevel': 'none', 'matchedWords': []},
  'averageAwardAmount': {'value': '700.0',
   'matchLevel': 'none',
   'matchedWords': []},
  'deadline': {'value': '1.5961536E+12',
   'matchLevel': 'none',
   'matchedWords': []}}}
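Once you have the hits, extracting the fields you care about is plain dict access. A small sketch working on a sample hit shaped like the output above (no network call; in practice you would iterate over r.json()['results'][0]['hits']):

```python
# Sample hit shaped like the JSON response shown above.
hits = [
    {'name': 'MyBestMattress Scholarship',
     'administeringAgency': 'My Best Mattress',
     'awardAmount': 700.0,
     'deadlineFormatted': 'July 31, 2020'},
]

# Format each hit into a one-line summary.
for hit in hits:
    line = (f"{hit['name']} ({hit['administeringAgency']}): "
            f"${hit['awardAmount']:.0f}, deadline {hit['deadlineFormatted']}")
    print(line)
# MyBestMattress Scholarship (My Best Mattress): $700, deadline July 31, 2020
```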

Regarding python - Printing the result of "find_all" (using the bs4 library) returns an empty list (but the class I'm referencing exists), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57262846/
