
java - Splitting a JSONArray into smaller JSONArrays


I have run into a situation where an org.json.JSONArray object is very large, which eventually causes latency and other problems. So we decided to split the JSONArray into smaller chunks. For example, if the JSONArray looks like this:

- [{"alt_party_id_type":"xyz","first_name":"child1ss","status":"1","dob":"2014-10-02 00:00:00.0","last_name":"childSs"},
{"alt_party_id_type":"xyz","first_name":"suga","status":"1","dob":"2014-11-05 00:00:00.0","last_name":"test"},
{"alt_party_id_type":"xyz","first_name":"test4a","status":"1","dob":"2000-11-05 00:00:00.0","last_name":"test4s"},
{"alt_party_id_type":"xyz","first_name":"demo56","status":"0","dob":"2000-11-04 00:00:00.0","last_name":"Demo5"},{"alt_party_id_type":"xyz","first_name":"testsss","status":"1","dob":"1900-01-01 00:00:00.0","last_name":"testssssssssss"},{"alt_party_id_type":"xyz","first_name":"Demo1234","status":"0","dob":"2014-11-21 00:00:00.0","last_name":"Demo1"},{"alt_party_id_type":"xyz","first_name":"demo2433","status":"1","dob":"2014-11-13 00:00:00.0","last_name":"demo222"},{"alt_party_id_type":"xyz","first_name":"demo333","status":"0","dob":"2014-11-12 00:00:00.0","last_name":"demo344"},{"alt_party_id_type":"xyz","first_name":"Student","status":"1","dob":"2001-12-03 00:00:00.0","last_name":"StudentTest"}]

I need help splitting this JSONArray into three JSONArrays:

- [{"alt_party_id_type":"xyz","first_name":"child1ss","status":"1","dob":"2014-10-02 00:00:00.0","last_name":"childSs"}, {"alt_party_id_type":"xyz","first_name":"suga","status":"1","dob":"2014-11-05 00:00:00.0","last_name":"test"}, {"alt_party_id_type":"xyz","first_name":"test4a","status":"1","dob":"2000-11-05 00:00:00.0","last_name":"test4s"}]


- [{"alt_party_id_type":"xyz","first_name":"demo56","status":"0","dob":"2000-11-04 00:00:00.0","last_name":"Demo5"}, {"alt_party_id_type":"xyz","first_name":"testsss","status":"1","dob":"1900-01-01 00:00:00.0","last_name":"testssssssssss"}, {"alt_party_id_type":"xyz","first_name":"Demo1234","status":"0","dob":"2014-11-21 00:00:00.0","last_name":"Demo1"}]


- [{"alt_party_id_type":"xyz","first_name":"demo2433","status":"1","dob":"2014-11-13 00:00:00.0","last_name":"demo222"}, {"alt_party_id_type":"xyz","first_name":"demo333","status":"0","dob":"2014-11-12 00:00:00.0","last_name":"demo344"}, {"alt_party_id_type":"xyz","first_name":"Student","status":"1","dob":"2001-12-03 00:00:00.0","last_name":"StudentTest"}]

Can anyone help me with this? I have tried many approaches, but all of them failed.

Best Answer

When processing huge input files you should use a streaming approach rather than loading the whole document into memory: it reduces the memory footprint, avoids an OutOfMemoryError, and lets you start processing while the input is still being read. JSONArray has very little support for streaming, so I would suggest looking at Jackson's streaming API, GSON streaming, or something similar.
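
For illustration, the sketch below shows how the same batching could be done with Jackson's streaming API (jackson-core). The class name JacksonSplit, the split-N.json output naming, and the BATCH_SIZE value are illustrative assumptions for this example, not something taken from the original question:

import java.io.*;
import com.fasterxml.jackson.core.*;

public class JacksonSplit {

    // Illustrative batch size; adjust to taste.
    private static final int BATCH_SIZE = 10;

    public static void main(String[] args) throws Exception {
        JsonFactory factory = new JsonFactory();
        try (JsonParser parser = factory.createParser(new File(args[0]))) {
            if (parser.nextToken() != JsonToken.START_ARRAY) {
                throw new IllegalStateException("Expected start of JSON array");
            }
            int outputIndex = 0;
            int inBatch = 0;
            JsonGenerator generator = null;
            // Read one array element at a time; only a single element is
            // ever buffered between parser and generator.
            while (parser.nextToken() != JsonToken.END_ARRAY) {
                if (generator == null) {
                    generator = factory.createGenerator(
                            new File("split-" + outputIndex + ".json"),
                            JsonEncoding.UTF8);
                    generator.writeStartArray();
                }
                // Copy the current object directly from parser to generator.
                generator.copyCurrentStructure(parser);
                if (++inBatch == BATCH_SIZE) {
                    // Close the current batch file and start a new one.
                    generator.writeEndArray();
                    generator.close();
                    generator = null;
                    inBatch = 0;
                    outputIndex++;
                }
            }
            if (generator != null) {
                generator.writeEndArray();
                generator.close();
            }
        }
    }
}

The same pattern works with GSON streaming, where JsonReader and JsonWriter play roughly the roles of JsonParser and JsonGenerator.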

That said, if you are stuck with JSONArray, you can piece together a streaming approach using JSONTokener. Below is a sample program that streams the input file and creates separate JSON documents, each containing at most 10 elements.

import java.io.*;
import java.util.*;
import org.json.*;

public class JsonSplit {

    private static final int BATCH_SIZE = 10;

    public static void flushFile(List<Object> objects, int d) throws Exception {
        try (FileOutputStream output = new FileOutputStream("split-" + d + ".json");
                Writer writer = new OutputStreamWriter(output, "UTF-8")) {
            JSONArray jsonArray = new JSONArray(objects);
            jsonArray.write(writer);
        }
    }

    public static void main(String[] args) throws Exception {
        int outputIndex = 0;
        try (InputStream input = new BufferedInputStream(
                new FileInputStream(args[0]))) {
            JSONTokener tokener = new JSONTokener(input);
            if (tokener.nextClean() != '[') {
                throw tokener.syntaxError("Expected start of JSON array");
            }
            List<Object> jsonObjects = new ArrayList<>();
            while (tokener.nextClean() != ']') {
                // Back up one character, it's part of the next value.
                tokener.back();
                // Read the next value in the array.
                jsonObjects.add(tokener.nextValue());
                // Flush if max objects per file has been reached.
                if (jsonObjects.size() == BATCH_SIZE) {
                    flushFile(jsonObjects, outputIndex);
                    jsonObjects.clear();
                    outputIndex++;
                }
                // Read and discard commas between array elements.
                if (tokener.nextClean() != ',') {
                    tokener.back();
                }
            }
            if (!jsonObjects.isEmpty()) {
                flushFile(jsonObjects, outputIndex);
            }
            // Verify that end of input is reached.
            if (tokener.nextClean() != 0) {
                throw tokener.syntaxError("Expected end of document");
            }
        }
    }

}

To see why a streaming approach is necessary for large files, download or create a huge JSON file and then try running a simple implementation that does not stream. Below is a Perl command that creates a JSON array with 1,000,000 elements and a file size of about 16 MB.

perl -le 'print "["; for (1..1_000_000) {print "," unless $_ == 1; print "{\"id\": " . int(rand(1_000_000)) . "}";} print "]"' > input_huge.json

If you run JsonSplit on this input, it will chug along happily with a small memory footprint, producing 100,000 files of 10 elements each. Moreover, it will start producing output files as soon as it begins running.
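
For reference, compiling and running the example might look like this on a Unix-like system (the org.json jar file name below is illustrative and depends on the version you have installed):

javac -cp json.jar JsonSplit.java
java -cp .:json.jar JsonSplit input_huge.json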

If you instead run the following JsonSplitNaive program, which reads the entire JSON document in one go, it will visibly do nothing for a long time and then abort with an OutOfMemoryError.

import java.io.*;
import java.util.*;
import org.json.*;

public class JsonSplitNaive {

    /*
     * Naive version - do not use, will fail with OutOfMemoryError for
     * huge inputs.
     */

    private static final int BATCH_SIZE = 10;

    public static void flushFile(List<Object> objects, int d) throws Exception {
        try (FileOutputStream output = new FileOutputStream("split-" + d + ".json");
                Writer writer = new OutputStreamWriter(output, "UTF-8")) {
            JSONArray jsonArray = new JSONArray(objects);
            jsonArray.write(writer);
        }
    }

    public static void main(String[] args) throws Exception {
        int outputIndex = 0;
        try (InputStream input = new BufferedInputStream(
                new FileInputStream(args[0]))) {
            List<Object> jsonObjects = new ArrayList<>();
            JSONArray jsonArray = new JSONArray(new JSONTokener(input));
            for (int i = 0; i < jsonArray.length(); i++) {
                jsonObjects.add(jsonArray.get(i));
                // Flush if max objects per file has been reached.
                if (jsonObjects.size() == BATCH_SIZE) {
                    flushFile(jsonObjects, outputIndex);
                    jsonObjects.clear();
                    outputIndex++;
                }
            }
            if (!jsonObjects.isEmpty()) {
                flushFile(jsonObjects, outputIndex);
            }
        }
    }

}

Regarding java - splitting a JSONArray into smaller JSONArrays, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/28263374/
