
mysql - java.sql.BatchUpdateException: transaction too large, len:300200


This error appears when writing to TiDB with the JDBC driver in Spark 2.2:

java.sql.BatchUpdateException: transaction too large, len:300200

The error occurs when:
selecting the whole table

No error occurs when:
the select is limited to 10000000;

I have no clue why.
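For context, here is a minimal sketch of the kind of write involved, with hypothetical table names, credentials, and TiDB endpoint. Spark's JDBC writer commits once per partition, so each partition arrives at TiDB as a single transaction; writing the whole table can make those transactions large enough to hit the limit.

```scala
import java.util.Properties
import org.apache.spark.sql.{SaveMode, SparkSession}

object WriteToTiDB {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("tidb-write").getOrCreate()

    val props = new Properties()
    props.setProperty("user", "root")          // hypothetical credentials
    props.setProperty("password", "")
    props.setProperty("driver", "com.mysql.jdbc.Driver")

    // Selecting the whole table: each (large) partition is written and
    // committed as one transaction, which can exceed TiDB's limits.
    val df = spark.sql("SELECT * FROM source_table")

    df.write
      .mode(SaveMode.Append)
      .jdbc("jdbc:mysql://tidb-host:4000/db", "target_table", props)
  }
}
```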

Best Answer

Copied from the TiDB document (note that the len:300200 reported in the error is just over the 300,000-entry limit described below):

The error message transaction too large is displayed.

As distributed transactions need to conduct two-phase commit and the bottom layer performs Raft replication, if a transaction is very large, the commit process would be quite slow and the following Raft replication flow is thus stuck. To avoid this problem, we limit the transaction size:

Each Key-Value entry is no more than 6MB.
The total number of Key-Value entries is no more than 300,000 rows.
The total size of Key-Value entries is no more than 100MB.

There are similar limits on Google Cloud Spanner.

Solution:

When you import data, insert in batches, and it is best to keep each batch within 10,000 rows.
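A minimal JDBC sketch of that advice, with hypothetical host, table, and columns: disable auto-commit and commit every 10,000 rows, so no single TiDB transaction grows past the limits above.

```scala
import java.sql.DriverManager

object BatchedImport {
  def main(args: Array[String]): Unit = {
    val conn = DriverManager.getConnection(
      "jdbc:mysql://tidb-host:4000/db", "root", "")
    conn.setAutoCommit(false)
    val ps = conn.prepareStatement(
      "INSERT INTO target_table (id, v) VALUES (?, ?)")

    val batchSize = 10000
    var count = 0
    for (id <- 1 to 1000000) {
      ps.setInt(1, id)
      ps.setString(2, s"row-$id")
      ps.addBatch()
      count += 1
      if (count % batchSize == 0) {
        ps.executeBatch()
        conn.commit()          // each commit is one TiDB transaction
      }
    }
    ps.executeBatch()
    conn.commit()              // flush the final partial batch
    ps.close(); conn.close()
  }
}
```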

As for insert and select, you can enable the hidden parameter with set @@session.tidb_batch_insert=1;, and insert will execute large transactions in batches. This avoids the timeout caused by large transactions, but may lose atomicity: an error during execution leaves the transaction partly inserted. Therefore, use this parameter only when necessary, and set it at the session level so other statements are not affected. When the transaction is finished, use set @@session.tidb_batch_insert=0 to turn it off.
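A sketch of how that switch might be used from JDBC, again with hypothetical connection details and tables. Note the caveat above: if the statement fails midway, the rows inserted so far stay in place.

```scala
import java.sql.DriverManager

object BatchInsertSwitch {
  def main(args: Array[String]): Unit = {
    val conn = DriverManager.getConnection(
      "jdbc:mysql://tidb-host:4000/db", "root", "")
    val st = conn.createStatement()
    st.execute("SET @@session.tidb_batch_insert=1") // split into batches
    try {
      // One large statement, executed in batches; not atomic while the
      // switch is on.
      st.executeUpdate("INSERT INTO target_table SELECT * FROM source_table")
    } finally {
      st.execute("SET @@session.tidb_batch_insert=0") // restore default
      st.close(); conn.close()
    }
  }
}
```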

As for delete and update, you can use LIMIT plus a loop.
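A sketch of that LIMIT-plus-loop pattern for deletes (the predicate and batch size are hypothetical): repeat a bounded DELETE until it affects zero rows, so each iteration commits as a small transaction.

```scala
import java.sql.DriverManager

object LoopedDelete {
  def main(args: Array[String]): Unit = {
    val conn = DriverManager.getConnection(
      "jdbc:mysql://tidb-host:4000/db", "root", "")
    val st = conn.createStatement()
    // Auto-commit is on, so each executeUpdate is its own transaction.
    var affected = -1
    while (affected != 0) {
      affected = st.executeUpdate(
        "DELETE FROM target_table WHERE created < '2017-01-01' LIMIT 10000")
    }
    st.close(); conn.close()
  }
}
```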

Regarding mysql - java.sql.BatchUpdateException: transaction too large, len:300200, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46903860/
