
python - Getting clean data: Is Beautiful Soup enough, or must I use Regex as well?

Repost · Author: 太空宇宙 · Updated: 2023-11-03 15:48:27

I am learning Beautiful Soup and dictionaries in Python. I am following Stanford's short Beautiful Soup tutorial, which can be found here: http://web.stanford.edu/~zlotnick/TextAsData/Web_Scraping_with_Beautiful_Soup.html

Since access to the web page is blocked, I stored the text provided in the tutorial as a string and then converted that string into a soup object, soup_string. The printed output looks like this:

    print(soup_string)

<html><body><div class="ec_statements"><div id="legalert_title"><a
href="/Legislation-and-Politics/Legislative-Alerts/Letter-to-Senators-
Urging-Them-to-Support-Cloture-and-Final-Passage-of-the-Paycheck-
Fairness-Act-S.2199">'Letter to Senators Urging Them to Support Cloture
and Final Passage of the Paycheck Fairness Act (S.2199)
</a>
</div>
<div id="legalert_date">
September 10, 2014
</div>
</div>
<div class="ec_statements">
<div id="legalert_title">
<a href="/Legislation-and-Politics/Legislative-Alerts/Letter-to-
Representatives-Urging-Them-to-Vote-on-the-Highway-Trust-Fund-Bill">
Letter to Representatives Urging Them to Vote on the Highway Trust Fund Bill
</a>
</div>
<div id="legalert_date">
July 30, 2014
</div>
</div>
<div class="ec_statements">
<div id="legalert_title">
<a href="/Legislation-and-Politics/Legislative-Alerts/Letter-to-Representatives-Urging-Them-to-Vote-No-on-the-Legislation-Providing-Supplemental-Appropriations-for-the-Fiscal-Year-Ending-Sept.-30-2014">
Letter to Representatives Urging Them to Vote No on the Legislation Providing Supplemental Appropriations for the Fiscal Year Ending Sept. 30, 2014
</a>
</div>
<div id="legalert_date">
July 30, 2014
</div>
</div>
</body></html>

At some point, the tutor captures all elements in the soup object that have the tag "div" and class_="ec_statements":

   letters = soup_string.find_all("div", class_="ec_statements")

Then the tutor says:

"We'll loop through all the items in the letters collection, and for each one, pull out the name and make it a key in our dictionary. The value will be another dictionary, but since we haven't found the contents of the other items yet, we'll just create and assign an empty dict object."
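The tutor's description can be sketched roughly like this (a minimal sketch; the html string below is a shortened stand-in for the tutorial's page, not the full markup):

```python
from bs4 import BeautifulSoup

# Shortened stand-in for the tutorial's HTML.
html = """
<div class="ec_statements">
  <div id="legalert_title"><a href="/alerts/example">Example Letter</a></div>
  <div id="legalert_date">July 30, 2014</div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

lobbying = {}
for element in soup.find_all("div", class_="ec_statements"):
    # The letter's name becomes the key; the value starts as an empty dict,
    # to be filled in later, as the tutor describes.
    lobbying[element.a.get_text(strip=True)] = {}

print(lobbying)  # {'Example Letter': {}}
```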

At this point I took a different approach: I decided to store the data first in lists and then in a DataFrame. The code is as follows:

    import pandas as pd

    lobbying_1 = []
    lobbying_2 = []
    lobbying_3 = []
    for element in letters:
        lobbying_1.append(element.a.get_text())
        lobbying_2.append(element.a.attrs.get('href'))
        lobbying_3.append(element.find(id="legalert_date").get_text())

    df = pd.DataFrame(lobbying_1, columns=['Name'])
    df['href'] = lobbying_2
    df['Date'] = lobbying_3

The output is as follows:

print(df)

Name \
0 \n 'Letter to Senators Urging Them to S...
1 \n Letter to Representatives Urging Th...
2 \n Letter to Representatives Urging Th...

href \
0 /Legislation-and-Politics/Legislative-Alerts/L...
1 /Legislation-and-Politics/Legislative-Alerts/L...
2 /Legislation-and-Politics/Legislative-Alerts/L...

Date
0 \n September 10, 2014\n
1 \n July 30, 2014\n
2 \n July 30, 2014\n

My question is: is there a way to get cleaner data through Beautiful Soup, i.e. strings without the \n and surrounding spaces, just the actual values? Or do I have to post-process the data with regular expressions?

Any advice would be greatly appreciated.

Best Answer

To remove the newlines and surrounding whitespace from the text, pass strip=True when calling get_text():

    for element in letters:
        lobbying_1.append(element.a.get_text(strip=True))
        lobbying_2.append(element.a.attrs.get('href'))
        lobbying_3.append(element.find(id="legalert_date").get_text(strip=True))

That is, of course, assuming you still want the data in the form of a DataFrame.
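And if the messy strings have already been collected, plain str.strip() removes the leading and trailing whitespace (including the \n characters) without any regex. A minimal sketch, using the date values from the question's output:

```python
# str.strip() with no arguments removes all leading/trailing whitespace,
# which includes newlines; no regular expressions are needed.
raw_dates = ["\n September 10, 2014\n", "\n July 30, 2014\n"]
clean_dates = [d.strip() for d in raw_dates]

print(clean_dates)  # ['September 10, 2014', 'July 30, 2014']
```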

Regarding "python - Getting clean data: Is Beautiful Soup enough, or must I use Regex as well?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41554113/
