python - How to scrape ID tags and their content (text) from a website?


At the top of this site there are 17 ID tags:

1. Boxed warning
2. Indications
3. Dosage/Administration
4. Dosage forms
5. Contraindications
6. Warnings/Precautions
7. Adverse reactions
8. Drug interactions
9. Specific populations
10. Overdosage
11. Description
12. Clinical pharmacology
13. Nonclinical toxicology
14. Clinical studies
15. How supplied
16. Patient counseling
17. Medication guide

I want to scrape the page and build a dictionary with these tags as keys. How can I do that? This is what I have tried so far:

import requests
from bs4 import BeautifulSoup, NavigableString, Tag

urls = "https://www.drugs.com/pro/abacavir-lamivudine-and-zidovudine-tablets.html"
response = requests.get(urls)
soup = BeautifulSoup(response.text, 'html.parser')
data3 = soup.findAll('h2')
out = {}
y1 = []
y2 = []
for header in data3:
    x0 = header.get('id')
    y1.append(x0)
    nextNode = header
    while True:
        nextNode = nextNode.nextSibling
        if nextNode is None:
            break
        if isinstance(nextNode, NavigableString):
            x1 = nextNode.strip()
        if isinstance(nextNode, Tag):
            if nextNode.name == "h2":
                break
            x2 = nextNode.get_text(strip=True).strip()
            x3 = x1 + " " + x2
            y2.append(x3)
print(y1, y2)

This is what I get:

Output I'm Getting: [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None] [content]

Desired Output: ['boxed warning', 'indications', 'dosage/administration', 'dosage forms', 'contraindications', 'warnings/precautions', 'adverse reactions', 'drug interactions', 'specific populations', 'overdosage', 'description', 'clinical pharmacology', 'nonclinical toxicology', 'clinical studies', 'how supplied', 'patient counseling', 'medication guide'] ['content present under boxed warning', 'content present under indications']

How can I get a dictionary or list where all those Nones are replaced by the list of tags? I'm struggling with the structure of the web page. Thanks!

Best answer

I'm not 100% sure what you need, but based on the comments I think this is what you're looking for. You can easily add the output to a list or a dictionary.

import requests
from bs4 import BeautifulSoup

urls = "https://www.drugs.com/pro/abacavir-lamivudine-and-zidovudine-tablets.html"
response = requests.get(urls)
soup = BeautifulSoup(response.text, 'html.parser')
tags = soup.find('div', {'class': 'ddc-anchor-links'})

available_information = []

for tag in tags.find_all('a'):
    available_information.append(tag.text)

print(available_information)
# output
['Boxed Warning', 'Indications and Usage', 'Dosage and Administration', 'Dosage Forms and Strengths', 'Contraindications', 'Warnings and Precautions', 'Adverse Reactions/Side Effects', 'Drug Interactions', 'Use In Specific Populations', 'Overdosage', 'Description', 'Clinical Pharmacology', 'Nonclinical Toxicology', 'Clinical Studies', 'How Supplied/Storage and Handling', 'Patient Counseling Information', 'Medication Guide']
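To go from that list of titles toward the dictionary the question asks for, the same pattern can be sketched offline. The HTML below merely imitates the 'ddc-anchor-links' block described above; the live page's markup may differ:

```python
from bs4 import BeautifulSoup

# Minimal offline sketch: this snippet of HTML stands in for the real
# 'ddc-anchor-links' block, so no network access is needed.
html = """
<div class="ddc-anchor-links">
  <a href="#boxed-warning">Boxed Warning</a>
  <a href="#indications">Indications and Usage</a>
</div>
"""

soup = BeautifulSoup(html, 'html.parser')
links = soup.find('div', {'class': 'ddc-anchor-links'})

# Map each section title to its anchor id; the section content can be
# filled in later once each section is scraped.
sections = {a.get_text(strip=True): a['href'].lstrip('#')
            for a in links.find_all('a')}
print(sections)
# {'Boxed Warning': 'boxed-warning', 'Indications and Usage': 'indications'}
```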


You can use this code to get the content of each table-of-contents entry:

anchor_tags = []
soup = BeautifulSoup(response.text, 'html.parser')
tags = soup.find('div', {'class': 'ddc-toc-content'})
for tag in tags.find_all('a'):
    anchor_tag = str(tag['href']).replace('#', '')
    anchor_tags.append(anchor_tag)

for tag in anchor_tags:
    anchor_tag = soup.find("a", {"id": tag})
    header_tag = anchor_tag.find_next_sibling('h2')
    # now you need to figure out how you want to store this information that is being extracted.
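One way to finish that last step is to collect every sibling between a section's heading and the next one. A minimal offline sketch, assuming the anchor-then-h2 markup described above (verify against the live page before relying on it):

```python
from bs4 import BeautifulSoup

# Imitation of the drugs.com pattern: an empty <a id=...> anchor
# immediately followed by the section's <h2> and its body.
html = """
<a id="warnings"></a>
<h2>Boxed Warning</h2>
<p>Risk of hypersensitivity.</p>
<a id="indications"></a>
<h2>Indications</h2>
<p>Treatment of HIV-1 infection.</p>
"""

soup = BeautifulSoup(html, 'html.parser')
content = {}
for anchor in soup.find_all('a', id=True):
    header = anchor.find_next_sibling('h2')
    # Collect every tag until the next section's anchor or heading.
    parts = []
    for node in header.find_next_siblings():
        if node.name in ('a', 'h2'):
            break
        parts.append(node.get_text(strip=True))
    content[header.get_text(strip=True)] = ' '.join(parts)

print(content)
# {'Boxed Warning': 'Risk of hypersensitivity.',
#  'Indications': 'Treatment of HIV-1 infection.'}
```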

Based on our chat conversation, this is how you can query multiple pages with different structures. As you scrape more pages with different structures, you will have to modify search_terms and known_tags.

import requests
from bs4 import BeautifulSoup

def get_soup(target_url):
    response = requests.get(target_url)
    soup = BeautifulSoup(response.text, 'html.parser')
    return soup

def obtain_toc_content(soup):
    available_information = []
    anchor_tags = []
    known_tags = ['div', 'ul']
    search_terms = ['ddc-toc-content', 'ddc-anchor-links']
    for tag, search_string in zip(known_tags, search_terms):
        tag_found = bool(soup.find(tag, {'class': search_string}))
        if tag_found:
            toc = soup.find(tag, {'class': search_string})
            for toc_tag in toc.find_all('a'):
                available_information.append(toc_tag.text)
                anchor_tag = str(toc_tag['href'])
                anchor_tags.append(anchor_tag)

    return available_information, anchor_tags


urls = ['https://www.drugs.com/pro/abacavir-lamivudine-and-zidovudine-tablets.html',
        'https://www.drugs.com/ajovy.html',
        'https://www.drugs.com/cons/a-b-otic.html']
for url in urls:
    make_soup = get_soup(url)
    results = obtain_toc_content(make_soup)
    table_of_content = results[0]
    toc_tags = results[1]
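The loop above unpacks the two lists but does not combine them; a small follow-on step can pair each title with its anchor, which is the dictionary shape the question asked for. Sample data stands in here for a real obtain_toc_content() call, which needs network access:

```python
# Hypothetical follow-on: titles and anchors as returned by
# obtain_toc_content() for one page (sample data, not live results).
table_of_content = ['Boxed Warning', 'Indications and Usage']
toc_tags = ['#warnings', '#indications']

# One dictionary per page, keyed by section title.
toc_index = {title: tag.lstrip('#')
             for title, tag in zip(table_of_content, toc_tags)}
print(toc_index)
# {'Boxed Warning': 'warnings', 'Indications and Usage': 'indications'}
```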

Regarding python - How to scrape ID tags and their content (text) from a website?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/67888938/
