
max_allowed_packet error in MySQL while fetching data from Google API and doing a batch insert




I get this error while performing a batch insert into MySQL:


java.sql.BatchUpdateException: Packet for query is too large (70,566,811 > 67,108,864). You can change this value on the server by setting the 'max_allowed_packet' variable.

I need to read insights from a Google API day by day and save them using the fetchMultiDailyMetricsTimeSeries method. Every day the program runs and reads the data for three days earlier, but I discovered that Google has not updated the data for the last three days, and now I have to backfill my table for at least 3 months. The problem is that the API returns a large amount of data that is stored in a JSON-typed column in MySQL. I cannot change the table structure, since other programs read from this table, and the problem will no longer exist once the table has been updated.


The query is a simple INSERT where I need to handle ON DUPLICATE KEY UPDATE. Also, I cannot change the max_allowed_packet variable (the DBA does not want to change it).


I have split my list so that the program performs an insert for every 500 records. However, that is not enough: the data is too large even with a 500-record limit. Right now I am trying to make the batch size smaller and also fetch the data over a shorter range of days (say 10) so that the packet stays under the limit, but that would be too slow, since I need to do this for all the clients (more than 20) and for 90 days.


I use Java and a JDBC PreparedStatement.
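Since the row sizes vary a lot because of the JSON column, one option (not from the original post, just a sketch) is to flush the JDBC batch based on the estimated payload size instead of a fixed row count, so a few unusually large JSON values cannot push a batch past the server's packet limit. The table and column names below (daily_insights, location_id, metric_date, insights_json) are hypothetical placeholders, and the 32 MB threshold is an assumed safety margin below the 64 MB max_allowed_packet from the error message.

import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class InsightsBatchWriter {

    // Stay well below the server's 67,108,864-byte max_allowed_packet.
    private static final long MAX_BATCH_BYTES = 32L * 1024 * 1024;

    // Hypothetical upsert; table and column names are placeholders.
    private static final String UPSERT_SQL =
            "INSERT INTO daily_insights (location_id, metric_date, insights_json) VALUES (?, ?, ?) "
            + "ON DUPLICATE KEY UPDATE insights_json = VALUES(insights_json)";

    public void save(Connection conn, List<InsightRow> rows) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(UPSERT_SQL)) {
            long batchedBytes = 0;
            for (InsightRow row : rows) {
                ps.setString(1, row.locationId());
                ps.setObject(2, row.metricDate());
                ps.setString(3, row.insightsJson());
                ps.addBatch();
                // Rough size estimate: the JSON value dominates the row.
                batchedBytes += row.insightsJson().getBytes(StandardCharsets.UTF_8).length;
                if (batchedBytes >= MAX_BATCH_BYTES) {
                    ps.executeBatch(); // flush before the payload grows past the limit
                    batchedBytes = 0;
                }
            }
            ps.executeBatch(); // flush whatever is left
        }
    }

    // Hypothetical value object: one day's insights for one client/location.
    public record InsightRow(String locationId, java.time.LocalDate metricDate, String insightsJson) {}
}

Note that with MySQL Connector/J the batch size maps directly to the packet size mainly when rewriteBatchedStatements=true, because the driver then sends the whole batch as one multi-row INSERT; and a single row whose JSON alone exceeds max_allowed_packet cannot be inserted without raising the limit, no matter how the batch is split.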


What else can I do?


More replies

Nothing, really. If you bump into the packet size limit, then either you have to increase the limit or decrease the size of the data you send.

Is your organization financially impacted by this inability to process data someone is providing for your consumption? If so, share this reality with the DBA and maybe he will consider raising it toward the limit of 1G, if RAM is available on the server. htop or top provides clues on RAM availability and usage for most Linux-type systems; Windows has similar clues.
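If it helps that conversation, the limit the server is actually enforcing can be read over the same JDBC connection; a minimal sketch (class and method names are mine):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PacketLimitCheck {
    // Reads the server's current max_allowed_packet (in bytes) so batch sizing can be tuned against it.
    public static long readMaxAllowedPacket(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW VARIABLES LIKE 'max_allowed_packet'")) {
            return rs.next() ? Long.parseLong(rs.getString("Value")) : -1L;
        }
    }
}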

@WilsonHauck Thank you. This problem does not happen too often. After we migrated to the new Google API for reading the insights, I assumed, as with the old API, a delay of 72 hours, but apparently the new API is more than 5 days behind, so I have to update my database values for at least 3 months, which leads to this much data. For now I have simply reduced the number of records saved per batch, and once all the clients are updated there will not be a problem.

The very best always. For performance tuning, view profile info.


typo: the hard limit is 1GB.

