
python - Split nested JSON into two/multiple files using Python

Reposted · Author: 行者123 · Updated: 2023-12-05 03:48:30

I have a nested JSON file that is 180 MB in size and contains up to 280,000 entries. The data in my JSON file looks like this:

{ 
"images": [
{"id": 0, "img_name": "abc.jpg", "category": "plants", "sub-catgory": "sea-plants", "object_name": "algae", "width": 640, "height": 480, "priority": "high"},
{"id": 1, "img_name": "xyz.jpg", "category": "animals", "sub-catgory": "sea-animals", "object_name": "fish", "width": 640, "height": 480, "priority": "low"},
{"id": 2, "img_name": "animal.jpg", "category": "plants", "sub-catgory": "sea-plants", "object_name": "algae_a", "width": 640, "height": 480, "priority": "high"},
{"id": 3, "img_name": "plant.jpg", "category": "animals", "sub-catgory": "sea-animals", "object_name": "fish", "width": 640, "height": 480, "priority": "low"}
],
"annotations": [
{"id": 0, "image_id": 0, "bbox": [42.56565, 213.75443, 242.73315, 106.09524], "joints_valid": [[1], [1], [1], [1], [0], [0]], "camera": "right", "camera_valid": 0},
{"id": 1, "image_id": 1, "bbox": [52.56565, 313.75443, 342.73315, 206.09524], "joints_valid": [[1], [1], [1], [1], [0], [0]], "camera": "right", "camera_valid": 0},
{"id": 2, "image_id": 2, "bbox": [72.56565, 713.75443, 742.73315, 706.09524], "joints_valid": [[1], [1], [1], [1], [0], [0]], "camera": "left", "camera_valid": 1},
{"id": 3, "image_id": 3, "bbox": [12.56565, 113.75443, 142.73315, 106.09524], "joints_valid": [[1], [1], [1], [1], [0], [0]], "camera": "left", "camera_valid": 1}
]
}

Note that all of the JSON data is on a single line; I have split it across multiple lines here for readability.

My question is: how can I split this JSON file into smaller files, or even just two? My JSON file is nested, with two main keys, `images` and `annotations`. The split files must keep the same hierarchy as above (meaning the `images` and `annotations` entries with the same ids must be stored together in one file).

For example: with the JSON data above, which has 4 entries under `images` and 4 entries under `annotations`, after splitting the data into two files each new JSON file should look as follows (2 `images` entries and 2 `annotations` entries in each generated file):

JSON file_1 data:

{ 
"images": [
{"id": 0, "img_name": "abc.jpg", "category": "plants", "sub-catgory": "sea-plants", "object_name": "algae", "width": 640, "height": 480, "priority": "high"},
{"id": 1, "img_name": "xyz.jpg", "category": "animals", "sub-catgory": "sea-animals", "object_name": "fish", "width": 640, "height": 480, "priority": "low"}
],
"annotations": [
{"id": 0, "image_id": 0, "bbox": [42.56565, 213.75443, 242.73315, 106.09524], "joints_valid": [[1], [1], [1], [1], [0], [0]], "camera": "right", "camera_valid": 0},
{"id": 1, "image_id": 1, "bbox": [52.56565, 313.75443, 342.73315, 206.09524], "joints_valid": [[1], [1], [1], [1], [0], [0]], "camera": "right", "camera_valid": 0}
]
}

JSON file_2 data:

{ 
"images": [
{"id": 2, "img_name": "animal.jpg", "category": "plants", "sub-catgory": "sea-plants", "object_name": "algae_a", "width": 640, "height": 480, "priority": "high"},
{"id": 3, "img_name": "plant.jpg", "category": "animals", "sub-catgory": "sea-animals", "object_name": "fish", "width": 640, "height": 480, "priority": "low"}
],
"annotations": [
{"id": 2, "image_id": 2, "bbox": [72.56565, 713.75443, 742.73315, 706.09524], "joints_valid": [[1], [1], [1], [1], [0], [0]], "camera": "left", "camera_valid": 1},
{"id": 3, "image_id": 3, "bbox": [12.56565, 113.75443, 142.73315, 106.09524], "joints_valid": [[1], [1], [1], [1], [0], [0]], "camera": "left", "camera_valid": 1}
]
}
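
To make the "same ids together in one file" requirement concrete: if the two lists were ever out of order, one way to keep each image paired with its annotations is to index the annotations by `image_id` before chunking. A minimal sketch (the helper name `split_paired` is hypothetical, not from the original post):

```python
import json
from collections import defaultdict

def split_paired(data, chunk_size):
    """Split the nested dict into chunks of chunk_size images, keeping each
    image together with every annotation that references it via image_id."""
    # Index annotations by the image they belong to, so order doesn't matter.
    by_image = defaultdict(list)
    for ann in data["annotations"]:
        by_image[ann["image_id"]].append(ann)

    chunks = []
    images = data["images"]
    for start in range(0, len(images), chunk_size):
        img_chunk = images[start:start + chunk_size]
        ann_chunk = [a for img in img_chunk for a in by_image[img["id"]]]
        chunks.append({"images": img_chunk, "annotations": ann_chunk})
    return chunks
```

Each returned chunk can then be written out with `json.dump` exactly as in the answer below.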

I have looked through many questions on Stack Overflow and GitHub but could not solve my problem. Some solutions exist, but they do not work for nested JSON data.

Here is json-splitter on GitHub, but it does not work with nested JSON.

There is another question on Stack Overflow; it works, but only for small files, because it is hard to supply specific ids or data to remove entries one by one.

I tried the following code from this GitHub post:

import json
import sys

with open(sys.argv[1], 'r') as infile:
    o = json.load(infile)
chunkSize = 4550
for i in range(0, len(o), chunkSize):  # the original used Python 2's xrange
    with open(sys.argv[1] + '_' + str(i // chunkSize) + '.json', 'w') as outfile:
        json.dump(o[i:i + chunkSize], outfile)

But it still does not solve my problem. What am I missing? I know there are many questions and answers on this topic, but because of the nested data none of the solutions work in my case. I am new to Python, so even after a lot of effort I could not solve this. Looking forward to valuable suggestions and solutions. Thanks.
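
For what it's worth, the snippet above fails on this file because `json.load` returns a dict here, not a list: `len(o)` is then just 2 (the number of top-level keys), and slicing a dict raises a `TypeError`. Each inner list has to be sliced separately. A quick demonstration:

```python
import json

# The attempted chunking code assumes a top-level list; this file is a dict.
doc = json.loads('{"images": [1, 2, 3], "annotations": [4, 5, 6]}')

print(len(doc))  # 2 -- the number of top-level keys, not the number of entries

try:
    doc[0:2]  # slicing a dict is not allowed
except TypeError as e:
    print(e)  # unhashable type: 'slice'

# The inner lists, however, slice fine -- each must be sliced separately:
print(doc["images"][0:2])       # [1, 2]
print(doc["annotations"][0:2])  # [4, 5]
```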

Best answer

The code below will do the split for you.

import json

d = {
    "images": [
        {"id": 0, "img_name": "abc.jpg", "category": "plants", "sub-catgory": "sea-plants",
         "object_name": "algae", "width": 640, "height": 480, "priority": "high"},
        {"id": 5, "img_name": "xyz.jpg", "category": "animals", "sub-catgory": "sea-animals",
         "object_name": "fish", "width": 640, "height": 480, "priority": "low"},
        {"id": 7, "img_name": "abc.jpg", "category": "plants", "sub-catgory": "sea-plants",
         "object_name": "algae", "width": 640, "height": 480, "priority": "high"},
        {"id": 9, "img_name": "xyz.jpg", "category": "animals", "sub-catgory": "sea-animals",
         "object_name": "fish", "width": 640, "height": 480, "priority": "low"},
        {"id": 99, "img_name": "xyz.jpg", "category": "animals", "sub-catgory": "sea-animals",
         "object_name": "fish", "width": 640, "height": 480, "priority": "low"}
    ],
    "annotations": [
        {"id": 0, "image_id": 0, "bbox": [42.56565, 213.75443, 242.73315, 106.09524],
         "joints_valid": [[1], [1], [1], [1], [0], [0]], "camera": "right", "camera_valid": 1},
        {"id": 5, "image_id": 5, "bbox": [42.56565, 213.75443, 242.73315, 106.09524],
         "joints_valid": [[1], [1], [1], [1], [0], [0]], "camera": "left", "camera_valid": 1},
        {"id": 7, "image_id": 0, "bbox": [42.56565, 213.75443, 242.73315, 106.09524],
         "joints_valid": [[1], [1], [1], [1], [0], [0]], "camera": "right", "camera_valid": 1},
        {"id": 9, "image_id": 5, "bbox": [42.56565, 213.75443, 242.73315, 106.09524],
         "joints_valid": [[1], [1], [1], [1], [0], [0]], "camera": "left", "camera_valid": 1},
        {"id": 99, "image_id": 5, "bbox": [42.56565, 213.75443, 242.73315, 106.09524],
         "joints_valid": [[1], [1], [1], [1], [0], [0]], "camera": "left", "camera_valid": 1}
    ]
}

NUM_OF_ENTRIES_IN_FILE = 2
counter = 0
# assuming the images and annotations lists are sorted with the same ids
while (counter + 1) * NUM_OF_ENTRIES_IN_FILE <= len(d['images']):
    temp = {'images': d['images'][counter * NUM_OF_ENTRIES_IN_FILE:(counter + 1) * NUM_OF_ENTRIES_IN_FILE],
            'annotations': d['annotations'][counter * NUM_OF_ENTRIES_IN_FILE:(counter + 1) * NUM_OF_ENTRIES_IN_FILE]}
    with open(f'out_{counter}.json', 'w') as f:
        json.dump(temp, f)
    counter += 1

# write any leftover entries to one last, smaller file
remainder = len(d['images']) % NUM_OF_ENTRIES_IN_FILE
if remainder > 0:
    temp = {'images': d['images'][-remainder:],
            'annotations': d['annotations'][-remainder:]}
    # counter already points past the last full chunk, so no extra increment
    with open(f'out_{counter}.json', 'w') as f:
        json.dump(temp, f)
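
To apply this to the actual 180 MB file rather than a hard-coded dict, the same logic can be wrapped in a function that loads the input first. A sketch under the assumption that the file has the `images`/`annotations` layout shown above (`split_json_file` and its parameters are names I made up):

```python
import json

def split_json_file(path, entries_per_file, prefix="out"):
    """Load a nested JSON file and write it out as numbered chunk files,
    each holding entries_per_file images plus the annotations at the
    same positions (assumes both lists are aligned by id)."""
    with open(path) as infile:
        d = json.load(infile)
    counter = 0
    for start in range(0, len(d["images"]), entries_per_file):
        chunk = {
            "images": d["images"][start:start + entries_per_file],
            "annotations": d["annotations"][start:start + entries_per_file],
        }
        with open(f"{prefix}_{counter}.json", "w") as outfile:
            json.dump(chunk, outfile)
        counter += 1
    return counter  # number of files written
```

With `entries_per_file` set to, say, 140000, the 280,000-entry file would come out as two roughly equal halves.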

Regarding "python - Split nested JSON into two/multiple files using Python", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64354154/
