
python beautiful-soup json - scrapes one page but not other similar ones


I am trying to scrape a nutrition website, and the following code works:

import requests
from bs4 import BeautifulSoup
import json
import re

page = requests.get("https://nutritiondata.self.com/facts/nut-and-seed-products/3071/1")
soup = BeautifulSoup(page.content, 'html.parser')

scripts = soup.find_all("script")
for script in scripts:
    if 'foodNutrients = ' in script.text:
        jsonStr = script.text
        jsonStr = jsonStr.split('foodNutrients =')[-1]
        jsonStr = jsonStr.rsplit('fillSpanValues')[0]
        jsonStr = jsonStr.rsplit(';', 1)[0]
        jsonStr = "".join(jsonStr.split())

        valid_json = re.sub(r'([{,:])(\w+)([},:])', r'\1"\2"\3', jsonStr)
        jsonObj = json.loads(valid_json)

        # These values are per 100 grams; I also calculate per serving.
        g_per_serv = int(jsonObj['FOODSERVING_WEIGHT_1'].split('(')[-1].split('g')[0])

        for k, v in jsonObj.items():
            if k == 'NUTRIENT_0':
                conv_v = (float(v) * g_per_serv) / 100
                print('%s : %s (per 100 grams) | %s (per serving %s)' % (k, round(float(v)), round(float(conv_v)), jsonObj['FOODSERVING_WEIGHT_1']))
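For reference, here is a minimal walk-through of what that cleaning chain produces, on a hypothetical simplified script body (the real foodNutrients object is much larger):

import re

script_text = 'foodNutrients = { NUTRIENT_0: "5.1" };\nfillSpanValues();'
s = script_text.split('foodNutrients =')[-1]  # keep everything after the assignment
s = s.rsplit('fillSpanValues')[0]             # drop the trailing function call
s = s.rsplit(';', 1)[0]                       # drop the final semicolon
s = "".join(s.split())                        # strip all whitespace
print(s)                                      # {NUTRIENT_0:"5.1"}
print(re.sub(r'([{,:])(\w+)([},:])', r'\1"\2"\3', s))  # {"NUTRIENT_0":"5.1"}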

But when I try to use it on other, almost identical pages in the same domain, it does not work. For example, if I use

page = requests.get("https://nutritiondata.self.com/facts/vegetables-and-vegetable-products/2383/2")

I get the error:

Traceback (most recent call last):
  File "scrape_test_2.py", line 20, in <module>
    jsonObj = json.loads(valid_json)
  File "/Users/benjamattesjaroen/anaconda3/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/Users/benjamattesjaroen/anaconda3/lib/python3.7/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/Users/benjamattesjaroen/anaconda3/lib/python3.7/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 1 column 5446 (char 5445)

Looking at the source of both pages, the relevant part appears identical:

<script type="text/javascript">
<!--
foodNutrients = { NUTRIENT_142: ........

This is the part being scraped.

I have been staring at this all day. Does anyone know how to make this script work for both pages? What is the problem here?

Best Answer

I would use hjson instead, which allows unquoted keys, and simply extract the whole foodNutrients variable and parse it, rather than manipulating the string over and over.
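For example, a minimal sketch of what hjson tolerates (the keys and values here are placeholders modeled on the snippets below):

import hjson  # pip install hjson

# Unquoted keys are exactly what breaks json.loads on this page's source:
obj = hjson.loads('{ NUTRIENT_0: "5.1", aifr: "[ -35, -10 ]" }')
print(obj['NUTRIENT_0'])  # 5.1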


Your error:

It currently fails because at least one of the source arrays has a different number of elements, so the regex you use for cleaning is not appropriate. Let's examine just the first known occurrence...

In the first url, before your regex cleanup:

aifr:"[ -35, -10 ]"

After:

"aifr":"[-35,-10]"

In the second url you instead start with an array of a different length:

aifr:"[ 163, 46, 209, 179, 199, 117, 11, 99, 7, 5, 82 ]"

After the regex substitutions, instead of:

"aifr":"[163,46,209,179,199,117,11,99,7,5,82]"

you have:

"aifr":"[163,"46",209,"179",199,"117",11,"99",7,"5",82]"

i.e. invalid JSON: the key:value pairs are no longer cleanly delimited.
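To make the mechanism concrete, here is a small self-contained reproduction (my own sketch, using only the aifr fragments above):

import re, json

pat = r'([{,:])(\w+)([},:])'

# The first url's array survives: '-' is not a word character, so nothing
# inside "[-35,-10]" matches the pattern.
print(re.sub(pat, r'\1"\2"\3', '{aifr:"[-35,-10]"}'))  # {"aifr":"[-35,-10]"}

# The second url's array does not: each match consumes both of its
# surrounding delimiters, so the regex can only re-match at every second
# comma and ends up quoting every other element.
broken = re.sub(pat, r'\1"\2"\3', '{aifr:"[163,46,209,179,199,117,11,99,7,5,82]"}')
print(broken)  # {"aifr":"[163,"46",209,"179",199,"117",11,"99",7,"5",82]"}

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print(e)  # Expecting ',' delimiter: ...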


In short:

Using hjson is simpler. Alternatively, update the regex so it properly handles variable-length arrays (a sketch of that follows the code below).

import requests, re, hjson

urls = ['https://nutritiondata.self.com/facts/nut-and-seed-products/3071/1',
        'https://nutritiondata.self.com/facts/vegetables-and-vegetable-products/2383/2']

p = re.compile(r'foodNutrients = (.*?);')

with requests.Session() as s:
    for url in urls:
        r = s.get(url)
        jsonObj = hjson.loads(p.findall(r.text)[0])
        serving_weight = jsonObj['FOODSERVING_WEIGHT_1']
        g_per_serv = int(serving_weight.split('(')[-1].split('g')[0])
        nutrient_0 = jsonObj['NUTRIENT_0']
        conv_v = float(nutrient_0) * g_per_serv / 100
        print('%s : %s (per 100 grams) | %s (per serving %s)' % (nutrient_0, round(float(nutrient_0)), round(float(conv_v)), serving_weight))
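As for the second option, here is a minimal sketch of a regex that quotes only keys (a word directly after { or , and directly before :) instead of anything between delimiters. This is my own suggestion, not part of the accepted answer, and it assumes the string values never contain a ,word: pattern of their own:

import requests, re, json

page = requests.get("https://nutritiondata.self.com/facts/vegetables-and-vegetable-products/2383/2")
jsonStr = re.search(r'foodNutrients = (.*?);', page.text).group(1)
jsonStr = "".join(jsonStr.split())  # strip whitespace, as in the original script

# Quote only the keys; array contents like [163,46,209] are left alone
# because no ':' follows them.
valid_json = re.sub(r'([{,])(\w+):', r'\1"\2":', jsonStr)
jsonObj = json.loads(valid_json)
print(jsonObj['FOODSERVING_WEIGHT_1'])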

Original question on Stack Overflow: https://stackoverflow.com/questions/58777555/
