
python - How to fix a malformed JSON decoding problem

Reposted. Author: 太空宇宙. Updated: 2023-11-03 21:22:36

Hi everyone. I need help opening and reading a file.

I have this txt file - https://yadi.sk/i/1TH7_SYfLss0JQ

It is a dictionary:

{"id0":"url0", "id1":"url1", ..., "idn":"urln"}

But it was written to the txt file with json.

# This is how I dump the data into a txt file
json.dump(after, open(os.path.join(os.getcwd(), 'before_log.txt'), 'a'))

So the file structure is {"id0":"url0", "id1":"url1", ..., "idn":"urln"}{"id2":"url2", "id3":"url3", ..., "id4":"url4"}{"id5":"url5", "id6":"url6", ..., "id7":"url7"}

And it is all one string...
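(For context: appending successive json.dump calls writes each object back to back with no separator, which is exactly how this structure arises. A minimal sketch, using a temporary file and made-up ids/urls:)

```python
import json
import os
import tempfile

# Append two dicts the same way the question does: each json.dump
# call writes one complete JSON object with nothing in between.
path = os.path.join(tempfile.mkdtemp(), 'before_log.txt')
for chunk in ({"id0": "url0"}, {"id1": "url1"}):
    with open(path, 'a') as f:
        json.dump(chunk, f)

with open(path) as f:
    print(f.read())  # {"id0": "url0"}{"id1": "url1"}
```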

I need to open it, check for duplicate ids, remove them, and save the file again.

But calling json.loads raises ValueError: Extra data.

I tried the approaches from these questions: "How to read line-delimited JSON from large file (line by line)", "Python json.loads shows ValueError: Extra data", and "json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 190)".

But I still get that error, just at a different position.

For now I have done this:

with open('111111111.txt', 'r') as log:
    before_log = log.read()
    before_log = before_log.replace('}{', ', ').split(', ')

mu_dic = []
for i in before_log:
    mu_dic.append(i)

This gets rid of the problem of having multiple {}{}{} dicts/JSON objects in a row.

Maybe there is a better way to do this?
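(One alternative worth noting: splitting on '}{' fails if that character sequence ever occurs inside a value, while json.JSONDecoder.raw_decode parses one complete object at a time and reports where it stopped. A sketch, assuming the file is a plain concatenation of JSON objects; the sample ids/urls are made up:)

```python
import json

def iter_concatenated_json(text):
    """Yield each dict from a string of back-to-back JSON objects."""
    decoder = json.JSONDecoder()
    pos = 0
    while pos < len(text):
        obj, end = decoder.raw_decode(text, pos)
        yield obj
        pos = end
        # skip any whitespace between objects
        while pos < len(text) and text[pos].isspace():
            pos += 1

merged = {}
for chunk in iter_concatenated_json('{"id0": "url0"}{"id0": "dup"}{"id1": "url1"}'):
    for key, value in chunk.items():
        merged.setdefault(key, value)  # keep the first value seen for each id

print(merged)  # {'id0': 'url0', 'id1': 'url1'}
```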

P.S. The file was created like this:

json.dump(after, open(os.path.join(os.getcwd(), 'before_log.txt'), 'a'))

Best Answer

Your file is 9.5 MB, so it would take a while to open and debug it by hand. Using the head and tail tools (usually available on any GNU/Linux distribution), you can see:

# You can use Python as well to read chunks from your file
# and see what it is that's causing the decode problem,
# but I prefer head & tail because they're ready to use :-D
$> head -c 217 111111111.txt
{"1933252590737725178": "https://instagram.fiev2-1.fna.fbcdn.net/vp/094927bbfd432db6101521c180221485/5CC0EBDD/t51.2885-15/e35/46950935_320097112159700_7380137222718265154_n.jpg?_nc_ht=instagram.fiev2-1.fna.fbcdn.net",
$> tail -c 219 111111111.txt
, "1752899319051523723": "https://instagram.fiev2-1.fna.fbcdn.net/vp/a3f28e0a82a8772c6c64d4b0f264496a/5CCB7236/t51.2885-15/e35/30084016_2051123655168027_7324093741436764160_n.jpg?_nc_ht=instagram.fiev2-1.fna.fbcdn.net"}
$> head -c 294879 111111111.txt | tail -c 12
net"}{"19332

So the first guess is that your file is a series of malformed JSON data, and the best bet is to separate the }{ pairs with \n for further processing.

So here is an example of how to solve the problem with Python:

import json

input_file = '111111111.txt'
output_file = 'new_file.txt'

data = ''
with open(input_file, mode='r', encoding='utf8') as f_file:
    # this with statement part can be replaced by
    # using sed under your OS like this example:
    # sed -i 's/}{/}\n{/g' 111111111.txt
    data = f_file.read()
    data = data.replace('}{', '}\n{')


seen, total_keys, to_write = set(), 0, {}
# split the lines of the in-memory data
for elm in data.split('\n'):
    # convert the line to a valid Python dict
    converted = json.loads(elm)
    # loop over the keys
    for key, value in converted.items():
        total_keys += 1
        # if the key has not been seen, add it for further manipulation
        # else ignore it
        if key not in seen:
            seen.add(key)
            to_write.update({key: value})

# write the dict's keys & values into a new file as JSON
with open(output_file, mode='a+', encoding='utf8') as out_file:
    out_file.write(json.dumps(to_write) + '\n')

print(
    'found duplicated key(s): {seen} from {total}'.format(
        seen=total_keys - len(seen),
        total=total_keys
    )
)

Output:

found duplicated key(s): 43836 from 45367

In the end, the output file will be a valid JSON file, with duplicate keys and their values removed.
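(The dedup-and-count logic from the answer can be exercised end to end on a small in-memory sample; the keys and values here are made up:)

```python
import json

# Two concatenated objects sharing one key ("b"), mirroring the
# replace-then-split-then-dedup steps of the answer above.
raw = '{"a": "u1", "b": "u2"}{"b": "u2dup", "c": "u3"}'
data = raw.replace('}{', '}\n{')

seen, total_keys, to_write = set(), 0, {}
for elm in data.split('\n'):
    converted = json.loads(elm)
    for key, value in converted.items():
        total_keys += 1
        if key not in seen:
            seen.add(key)
            to_write[key] = value

assert to_write == {"a": "u1", "b": "u2", "c": "u3"}
assert total_keys - len(seen) == 1  # one duplicated key ("b")
```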

Regarding "python - How to fix a malformed JSON decoding problem", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/54121686/
