
python - How to retrieve information from a website using Beautiful Soup?


I have been given a task where I must use a crawler to retrieve information from a website (URL: https://www.onepa.gov.sg/cat/adventure).

The site lists multiple products. Each product has a link that directs to that individual product's page, and I want to collect all of those links.

[Screenshot of the webpage]

[Screenshot of the HTML code]

For example, one of the products is named KNOTTY STUFF, and I want to get its href, /class/details/c026829364.

import requests
from bs4 import BeautifulSoup


def get_soup(url):
    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, features="html.parser")
    return soup


url = "https://www.onepa.gov.sg/cat/adventure"
soup = get_soup(url)
for i in soup.findAll("a", {"target": "_blank"}):
    print(i.get("href"))

The output is https://tech.gov.sg/report_vulnerability and https://www.pa.gov.sg/feedback, which does not include what I am looking for: /class/details/c026829364.

Any help or assistance is appreciated. Thank you!

Best Answer

The website is loaded dynamically, so requests alone cannot see the product links in the initial HTML. However, the links can be obtained by sending a POST request to:

https://www.onepa.gov.sg/sitecore/shell/WebService/Card.asmx/GetCategoryCard
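To see why the original approach fails: the product cards are rendered client-side, so the target href never appears in the HTML that requests receives. A minimal check, assuming the page still behaves as it did when this answer was written:

import requests

# Fetch the raw HTML without executing any JavaScript.
html = requests.get("https://www.onepa.gov.sg/cat/adventure").text

# The product link is injected client-side, so it should be absent here.
print("/class/details/c026829364" in html)  # expected: False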

Then search for the links with the built-in re (regular expressions) module:

import re
import requests


URL = "https://www.onepa.gov.sg/sitecore/shell/WebService/Card.asmx/GetCategoryCard"

# Headers copied from the browser's request. The cookie values are tied to
# the original session and may need to be refreshed for the request to work.
headers = {
    "authority": "www.onepa.gov.sg",
    "accept": "application/json, text/javascript, */*; q=0.01",
    "x-requested-with": "XMLHttpRequest",
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36",
    "content-type": "application/json; charset=UTF-8",
    "origin": "https://www.onepa.gov.sg",
    "sec-fetch-site": "same-origin",
    "sec-fetch-mode": "cors",
    "sec-fetch-dest": "empty",
    "referer": "https://www.onepa.gov.sg/cat/adventure",
    "cookie": "visid_incap_2318972=EttdbbMDQMeRolY+XzbkN8tR5l8AAAAAQUIPAAAAAAAjkedvsgJ6Zxxk2+19JR8Z; SC_ANALYTICS_GLOBAL_COOKIE=d6377e975a10472b868e47de9a8a0baf; _sp_ses.075f=*; ASP.NET_SessionId=vn435hvgty45y0fcfrold2hx; sc_pview_shuser=; __AntiXsrfToken=30b776672938487e90fc0d2600e3c6f8; BIGipServerpool_PAG21PAPRPX00_443=3138016266.47873.0000; incap_ses_7221_2318972=5BC1VKygmjGGtCXbUiU2ZNRS5l8AAAAARKX8luC4fGkLlxnme8Ydow==; font_multiplier=0; AMCVS_DF38E5285913269B0A495E5A%40AdobeOrg=1; _sp_ses.603a=*; SC_ANALYTICS_SESSION_COOKIE=A675B7DEE34A47F9803ED6D4EC4A8355|0|vn435hvgty45y0fcfrold2hx; _sp_id.603a=d539f6d1-732d-4fca-8568-e8494f8e584c.1608930022.1.1608930659.1608930022.bfeb4483-a418-42bb-ac29-42b6db232aec; _sp_id.075f=5e6c62fd-b91d-408e-a9e3-1ca31ee06501.1608929756.1.1608930947.1608929756.73caa28b-624c-4c21-9ad0-92fd2af81562; AMCV_DF38E5285913269B0A495E5A%40AdobeOrg=1075005958%7CMCIDTS%7C18622%7CMCMID%7C88630464609134511097093602739558212170%7CMCOPTOUT-1608938146s%7CNONE%7CvVersion%7C4.4.1",
}

# The JSON payload the page's JavaScript sends; the "[filter]" and "[cp]"
# placeholders are passed through literally here.
data = '{"cat":"adventure", "subcat":"", "sort":"", "filter":"[filter]", "cp":"[cp]"}'

response = requests.post(URL, data=data, headers=headers)
# The markup inside the response is escaped, so decode it before extracting
# the relative product paths from the <Link> tags.
print(re.findall(r"<Link>(.*)<", response.content.decode("unicode_escape")))

Output:

['/class/details/c026829364', '/interest/details/i000027991', '/interest/details/i000009714']
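If you need absolute URLs rather than relative paths, they can be resolved against the site root with urllib.parse.urljoin. A small follow-up sketch, reusing the response object from the code above:

import re
from urllib.parse import urljoin

BASE = "https://www.onepa.gov.sg"

# `response` is the POST response from the previous snippet.
paths = re.findall(r"<Link>(.*)<", response.content.decode("unicode_escape"))

# Resolve each relative product path against the site root.
print([urljoin(BASE, path) for path in paths])

This would print the same three links, each prefixed with https://www.onepa.gov.sg.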

Regarding "python - How to retrieve information from a website using Beautiful Soup?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/65451101/
