
python - How to scrape content using BeautifulSoup4 from a website where no class or id is specified in the attributes

Reposted · Author: 行者123 · Updated: 2023-12-04 07:16:45

I want to scrape the text inside the "a" tag (i.e. just the name "42mm Architecture") and the individual pieces of content under "Scope of services, Types of Built Projects, Locations of Built Projects, Style of work, Website" into a CSV file, with those as the headers and their content filled in for the entire webpage.
The elements have no class or id associated with them, so I am a bit stuck on how to extract these details properly, with those "br" and "b" tags mixed in between.
There are multiple "p" tags before and after the code block provided. Here is the website.

<h2>
<a href="http://www.dezeen.com/tag/design-by-42mm-architecture" rel="noopener noreferrer" target="_blank">
42mm Architecture
</a>
|
<span style="color: #808080;">
Delhi | Top Architecture Firms/ Architects in India
</span>
</h2>
<!-- /wp:paragraph -->
<p>
<b>
Scope of services:
</b>
Architecture, Interiors, Urban Design.
<br/>
<b>
Types of Built Projects:
</b>
Residential, commercial, hospitality, offices, retail, healthcare, housing, Institutional
<br/>
<b>
Locations of Built Projects:
</b>
New Delhi and nearby states
<b>
<br/>
</b>
<b>
Style of work
</b>
<span style="font-weight: 400;">
: Contemporary
</span>
<br/>
<b>
Website
</b>
<span style="font-weight: 400;">
:
<a href="https://www.42mm.co.in/">
42mm.co.in
</a>
</span>
</p>
So how can this be done using BeautifulSoup4?

Best Answer

This one took a bit of time! The webpage is incomplete, with few tags and identifiers. On top of that, they did not even proofread the content: for example, one place has the heading Scope of Services and another has Scope of services, and there are more cases like this! So what I did is a rough extraction, and if you also have pagination in mind, I am sure it will help you.

import requests
from bs4 import BeautifulSoup
import csv

page = requests.get('https://www.re-thinkingthefuture.com/top-architects/top-architecture-firms-in-india-part-1/')
soup = BeautifulSoup(page.text, 'lxml')

# there are many h2 tags but we want the ones without any class name
h2 = soup.find_all('h2', class_='')

headers = []
contents = []
header_len = []
a_tags = []

for i in h2:
    if i.find_next().name == 'a':  # to make sure we do not grab the wrong tag
        a_tags.append(i.find_next().text)
        p = i.find_next_sibling()
        contents.append(p.text)
        h = [j.text for j in p.find_all('strong')]  # some headings were bold in the website
        headers.append(h)
        header_len.append(len(h))

# since only some headings were in bold, the entry with the most bold tags gives all the headers
headers = headers[header_len.index(max(header_len))]

# removing the : from headings
headers = [i[:len(i)-1] for i in headers]

# inserting a new heading
headers.insert(0, 'Firm')

# n for traversing through the headers list
# k for traversing through the a_tags list
n = 1
k = 0

# this is the difficult part, where the content has all the details in one value, headings included, like this:
"""
Scope of services: Architecture, Interiors, Urban Design.Types of Built Projects: Residential, commercial, hospitality, offices, retail, healthcare, housing, InstitutionalLocations of Built Projects: New Delhi and nearby statesStyle of work: ContemporaryWebsite: 42mm.co.in
"""
# thus I am splitting it on ':' and then slicing each piece from the start of each heading

contents = [i.split(':') for i in contents]
for i in contents:
    for j in i:
        h = headers[n][:5]
        if i.index(j) == 0:
            i[i.index(j)] = a_tags[k]
            n += 1
            k += 1
        elif h in j:
            i[i.index(j)] = j[:j.index(h)]
            j = j[:j.index(h)]
            if n < len(headers)-1:
                n += 1
    n = 1

    # merging those extra values in the list if any
    if len(i) == 7:
        i[3] = i[3] + ' ' + i[4]
        i.remove(i[4])

# writing into the csv file
# if you don't want a blank line between rows, add the newline='' argument to the open function below
with open('output.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerow(headers)
    writer.writerows(contents)
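If splitting on ':' feels fragile, the label/value pairs in the question's p block can also be recovered by walking the b tags directly and collecting everything up to the next one. A rough sketch against a trimmed copy of the question's HTML (variable names are mine, not from the answer above):

```python
from bs4 import BeautifulSoup, Tag

# trimmed copy of the <p> block from the question
html = """<p>
<b>Scope of services:</b> Architecture, Interiors, Urban Design.<br/>
<b>Types of Built Projects:</b> Residential, commercial
<b><br/></b>
<b>Website</b><span>: <a href="https://www.42mm.co.in/">42mm.co.in</a></span>
</p>"""

soup = BeautifulSoup(html, 'html.parser')

fields = {}
for b in soup.find_all('b'):
    label = b.get_text(strip=True).rstrip(':')
    if not label:  # skip the empty <b><br/></b> wrappers
        continue
    # gather everything between this <b> and the next one
    parts = []
    for sib in b.next_siblings:
        if getattr(sib, 'name', None) == 'b':
            break
        parts.append(sib.get_text() if isinstance(sib, Tag) else str(sib))
    fields[label] = ' '.join(parts).strip().lstrip(':').strip()

print(fields)
```

This sidesteps the concatenated-text problem entirely, at the cost of depending on the labels actually being wrapped in b tags on every entry.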
Here is the output: (screenshot of the resulting CSV omitted)
If you want pagination, just add the page number to the end of the URL!
page_num = 1
while page_num < 13:
    page = requests.get(f'https://www.re-thinkingthefuture.com/top-architects/top-architecture-firms-in-india-part-1/{page_num}/')

    # paste the above code here, starting from soup = BeautifulSoup(page.text, 'lxml')

    page_num += 1
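The loop above can also be expressed as a plain list of page URLs to fetch, which is easier to inspect before making any requests (this assumes, as the answer does, that pages 1 through 12 exist at that path):

```python
base = 'https://www.re-thinkingthefuture.com/top-architects/top-architecture-firms-in-india-part-1/'

# page 1 is base + '1/', page 2 is base + '2/', and so on
urls = [f'{base}{page_num}/' for page_num in range(1, 13)]
print(len(urls))
print(urls[0])
```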
Hope this helps. Let me know if there are any mistakes.
Edit 1:
Sorry, I forgot to mention the most important part. If a tag has no class name, you can still grab it using what I used in the code above:
h2 = soup.find_all('h2', class_= '')
This simply says: give me all h2 tags that have no class name. This by itself can sometimes act as a unique identifier, because we are using the absence of a class value to identify the tag.
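To see what class_='' actually selects, here is a self-contained check against a made-up fragment (the heading texts are invented for illustration):

```python
from bs4 import BeautifulSoup

html = """
<h2 class="widget-title">Popular posts</h2>
<h2><a href="#">42mm Architecture</a></h2>
<h2 class="entry-title">About</h2>
"""
soup = BeautifulSoup(html, 'html.parser')

# class_='' matches only the h2 tags that carry no class attribute
bare = soup.find_all('h2', class_='')
print([h.get_text(strip=True) for h in bare])
```

Only the unclassed heading survives the filter, which is exactly the property the answer relies on.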

Regarding python - how to scrape content using BeautifulSoup4 from a website where no class or id is specified in the attributes, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/68716788/

Copyright 2021 - 2024 cfsdn All Rights Reserved 蜀ICP备2022000587号