
python - Unable to extract all URLs from the json script with beautifulsoup

Reprinted. Author: 行者123. Updated: 2023-12-01 08:28:08

import requests
from bs4 import BeautifulSoup
import json
import re

url = "https://www.daraz.pk/catalog/?q=dell&_keyori=ss&from=input&spm=a2a0e.searchlist.search.go.57446b5079XMO8"
page = requests.get(url)

print(page.status_code)
print(page.text)
soup = BeautifulSoup(page.text, 'html.parser')
print(soup.prettify())

alpha = soup.find_all('script', {'type': 'application/ld+json'})

jsonObj = json.loads(alpha[1].text)
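For context, here is a hypothetical example of the ItemList shape the parsing code below expects. The field names follow schema.org conventions, but the values (product name, URL, price) are made up for illustration:

```python
import json

# Hypothetical ld+json payload with the ItemList structure that the
# parsing loop below expects (field names follow schema.org).
raw = '''{
  "@type": "ItemList",
  "itemListElement": [
    {"name": "Dell Inspiron 15",
     "@type": "Product",
     "url": "https://example.com/item/1",
     "offers": {"price": "49999",
                "priceCurrency": "PKR",
                "availability": "https://schema.org/InStock"}}
  ]
}'''
jsonObj = json.loads(raw)
print(jsonObj['itemListElement'][0]['offers']['priceCurrency'])  # PKR
```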

Below is the code that looks up all the relevant product information in the json object:

for item in jsonObj['itemListElement']:
    name = item['name']
    price = item['offers']['price']
    currency = item['offers']['priceCurrency']
    availability = item['offers']['availability'].split('/')[-1]
    availability = [s for s in re.split("([A-Z][^A-Z]*)", availability) if s]
    availability = ' '.join(availability)
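As a side note, a quick sketch of what the availability clean-up above produces, using a sample schema.org value:

```python
import re

# The schema.org availability value is a URL like
# "https://schema.org/InStock"; the code keeps the last path
# segment, then re-inserts spaces before the capital letters.
raw = "https://schema.org/InStock"
availability = raw.split('/')[-1]                               # "InStock"
parts = [s for s in re.split("([A-Z][^A-Z]*)", availability) if s]
print(' '.join(parts))                                          # In Stock
```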

This is the code that extracts the URL from the json script:

    url = item['url']

    print('Availability: %s Price: %0.2f %s Name: %s URL: %s' % (availability, float(price), currency, name, url))

Below is the code that writes the data out to a csv:

import csv

outfile = open('products.csv', 'w', newline='')
writer = csv.writer(outfile)
writer.writerow(["name", "type", "price", "priceCurrency", "availability"])

alpha = soup.find_all('script', {'type': 'application/ld+json'})

jsonObj = json.loads(alpha[1].text)

for item in jsonObj['itemListElement']:
    name = item['name']
    type = item['@type']
    url = item['url']
    price = item['offers']['price']
    currency = item['offers']['priceCurrency']
    availability = item['offers']['availability'].split('/')[-1]

The file is created with the header, but there is no URL data in the CSV:

    writer.writerow([name, type, price, currency, availability, URL])

outfile.close()

Best Answer

First, you don't include the header there. Not a big deal, it just means the header of the url column in the first row is blank. So include:

writer.writerow(["name", "type", "price", "priceCurrency", "availability", "url"])

Secondly, you store the string as url, then reference URL in the writer. URL has no value. In fact, it should give a URL is not defined or similar error.
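A two-line illustration of the NameError the answer predicts (the example URL is a placeholder):

```python
# Python names are case-sensitive: defining `url` does not define `URL`.
url = "https://example.com"   # the lower-case name is defined
caught = False
try:
    print(URL)                # the upper-case name was never assigned
except NameError as err:
    caught = True
    print(err)                # name 'URL' is not defined
```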

Since you already use url in your code, where url = "https://www.daraz.pk/catalog/?q=dell&_keyori=ss&from=input&spm=a2a0e.searchlist.search.go.57446b5079XMO8", I might also change the variable name to something like url_text.

I would also probably use a variable name like type_text rather than type, since type is a built-in function in python.
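A small demonstration of why shadowing the built-in matters (the string value here is just an example):

```python
# After something like `type = item['@type']`, later calls to
# type(...) in the same scope fail, because the name now holds a string.
type = "Product"              # shadows the built-in type()
shadowed = False
try:
    type(42)                  # "Product"(42) -> str is not callable
except TypeError:
    shadowed = True
del type                      # remove the shadow; the built-in is back
print(shadowed, type(42))
```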

And you need to change the writer line to:

writer.writerow([name, type, price, currency, availability, url ])
outfile.close()

Full code:

import requests
from bs4 import BeautifulSoup
import json
import csv

url = "https://www.daraz.pk/catalog/?q=dell&_keyori=ss&from=input&spm=a2a0e.searchlist.search.go.57446b5079XMO8"
page = requests.get(url)

print(page.status_code)
print(page.text)
soup = BeautifulSoup(page.text, 'html.parser')
print(soup.prettify())

alpha = soup.find_all('script', {'type': 'application/ld+json'})

jsonObj = json.loads(alpha[1].text)

outfile = open(r'c:\products.csv', 'w', newline='')
writer = csv.writer(outfile)
writer.writerow(["name", "type", "price", "priceCurrency", "availability", "url"])

for item in jsonObj['itemListElement']:
    name = item['name']
    type_text = item['@type']
    url_text = item['url']
    price = item['offers']['price']
    currency = item['offers']['priceCurrency']
    availability = item['offers']['availability'].split('/')[-1]

    writer.writerow([name, type_text, price, currency, availability, url_text])

outfile.close()
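As a usage note, the same export can be written with a context manager (so the file is closed even on errors) and csv.DictWriter, which keeps column names and values aligned. This is a sketch under the assumption that jsonObj has the ItemList structure shown above; the sample record here is made up:

```python
import csv

# Hypothetical jsonObj with the same structure as the scraped ItemList.
jsonObj = {"itemListElement": [
    {"name": "Dell Inspiron", "@type": "Product",
     "url": "https://example.com/item/1",
     "offers": {"price": "49999", "priceCurrency": "PKR",
                "availability": "https://schema.org/InStock"}}]}

fields = ["name", "type", "price", "priceCurrency", "availability", "url"]
with open('products.csv', 'w', newline='') as outfile:
    writer = csv.DictWriter(outfile, fieldnames=fields)
    writer.writeheader()
    for item in jsonObj['itemListElement']:
        writer.writerow({
            "name": item['name'],
            "type": item['@type'],
            "price": item['offers']['price'],
            "priceCurrency": item['offers']['priceCurrency'],
            "availability": item['offers']['availability'].split('/')[-1],
            "url": item['url'],
        })
```

With DictWriter, a misspelled or missing key raises an error at write time instead of silently misaligning columns.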

Regarding python - Unable to extract all URLs from the json script with beautifulsoup, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54088382/
