
Python Table Scraping


I am trying to scrape the "Major Stock Indexes" table from https://markets.wsj.com/ and save it to a folder on my desktop. This is what I have so far:

import urllib.request
import json
import re

html = urllib.request.urlopen("https://markets.wsj.com/").read().decode('utf8')
json_data = re.findall(r'pws_bootstrap:(.*?)\s+,\s+country\:', html, re.S)
data = json.loads(json_data[0])

filename = r"C:\Users\me\folder\sample.csv"  # raw string so the backslashes in the Windows path are not treated as escape sequences
f = open(filename, "w")

for numbers in data['chart']:
    for obs in numbers['Major Stock Indexes']:
        f.write(str(obs['firstCol']) + "," + str(obs['dataCol']) + "," + str(obs['dataCol priceUp']) + str(obs['dataCol lastb priceUp']) + "\n")

print(obs.keys())

I am getting the error: IndexError: list index out of range

Any ideas on how to fix my problem?
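The traceback points at json_data[0]. A quick diagnostic, reusing the URL and regex from the script above (just a sketch, separate from the script itself), shows whether re.findall matched anything at all:

import re
import urllib.request

html = urllib.request.urlopen("https://markets.wsj.com/").read().decode('utf8')
matches = re.findall(r'pws_bootstrap:(.*?)\s+,\s+country\:', html, re.S)
print(len(matches))  # 0 means the pattern found no match, so matches[0] raises IndexError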

Best answer

Your json_data is an empty list [], so you should use a scraping tool like bs4 instead, as shown below:

from bs4 import BeautifulSoup
import urllib.request
html = urllib.request.urlopen("https://markets.wsj.com/").read().decode('utf8')
soup = BeautifulSoup(html, 'html.parser') # parse the html
t = soup.find('table', {'summary': 'Major Stock Indexes'}) # find the table whose summary attribute equals 'Major Stock Indexes'
tr = t.find_all('tr') # get all table rows from the selected table
row_lis = [i.find_all('td') if i.find_all('td') else i.find_all('th') for i in tr if i.text.strip()] # collect the cells of each non-empty row (td cells, falling back to th cells for the header)
print([','.join(x.text.strip() for x in i) for i in row_lis])

Output:

[',Last,Change,% CHG,',
'DJIA,26049.64,259.29,1.01%',
'Nasdaq,8017.90,71.92,0.91%',
'S&P 500,2896.74,22.05,0.77%',
'Russell 2000,1728.41,2.73,0.16%',
'Global Dow,3105.09,3.73,0.12%',
'Japan: Nikkei 225,22930.58,130.94,0.57%',
'Stoxx Europe 600,385.57,2.01,0.52%',
'UK: FTSE 100,7577.49,14.27,0.19%']

Now you can iterate over this list and write it to a CSV file instead of printing it.
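For example, a minimal sketch using Python's csv module, reusing row_lis from the snippet above (the filename sample.csv is just a placeholder):

import csv

rows = [[x.text.strip() for x in i] for i in row_lis]  # same cell text as printed above, kept as separate columns
with open("sample.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerows(rows)  # one CSV row per table row, including the header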

Regarding Python table scraping, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/52049184/
