
mysql - Setting the correct innodb_log_file_size in MySQL


We ran an ALTER TABLE today that took the database down. We failed over to the slave, and in the post-mortem we found this in the MySQL error.log:
InnoDB: ERROR: the age of the last checkpoint is 90608129,
InnoDB: which exceeds the log group capacity 90593280.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
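The two numbers in the message can be reproduced from the server's settings. A minimal sketch, assuming MySQL 5.6's internals (a 2048-byte header per redo log file and a ~90% safety factor on the usable capacity):

```python
# Reproduce InnoDB's "log group capacity" check from the error message.
# Assumption: MySQL 5.6 reserves a 2048-byte header per log file
# (LOG_FILE_HDR_SIZE) and treats only 90% of the remainder as usable.
LOG_FILE_HDR_SIZE = 2048

def log_group_capacity(file_size, n_files):
    """Usable redo capacity in bytes for a group of log files."""
    return int((file_size - LOG_FILE_HDR_SIZE) * n_files * 0.9)

capacity = log_group_capacity(50331648, 2)   # innodb_log_file_size x 2 files
checkpoint_age = 90608129                    # age quoted in the error message

print(capacity)                   # 90593280, matching the error message
print(checkpoint_age > capacity)  # True: the checkpoint fell behind the logs
```

The computed capacity matches the 90593280 in the log line exactly, which confirms the error came from these two settings rather than from anything exotic.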

The error rang true, because we were working on a very large table containing BLOB data types.

The best answer we found online:

To solve it, you need to stop MySQL cleanly (very important), delete the existing InnoDB log files (probably ib_logfile* in your MySQL data directory, unless you've moved them), then adjust the innodb_log_file_size to suit your needs, and then start MySQL again. This article from the MySQL performance blog might be instructive.

And in the comments:

Yes, the database server will effectively hang for any updates to InnoDB tables when the log fills up. It can cripple a site.

Given our current (default) innodb_log_file_size of 48 MB, I'm guessing that's exactly what happened to us?

SHOW GLOBAL VARIABLES LIKE '%innodb_log%';
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| innodb_log_buffer_size      | 8388608  |
| innodb_log_compressed_pages | ON       |
| innodb_log_file_size        | 50331648 |
| innodb_log_files_in_group   | 2        |
| innodb_log_group_home_dir   | ./       |
+-----------------------------+----------+

So, that brings me to two pointed questions and one open-ended one:

  1. How do we determine the largest row, so we can set innodb_log_file_size accordingly?
  2. What are the consequences of the action in step 1? I've read about longer recovery times with larger logs.
  3. Is there anything else I should worry about regarding migrations, given that we have a large table (650k rows, 6169.8 GB) containing unrestricted, variable-length BLOB fields?

We're running MySQL 5.6, and here is our my.cnf:

[mysqld]

#defaults
basedir = /opt/mysql/server-5.6
datadir = /var/lib/mysql
port = 3306
socket = /var/run/mysqld/mysqld.sock
tmpdir = /tmp
bind-address = 0.0.0.0

#logs
log_error = /var/log/mysql/error.log
expire_logs_days = 4
slow_query_log = on
long_query_time = 1


innodb_buffer_pool_size = 11G

#http://stackoverflow.com/a/10866836/182484
collation-server = utf8_bin
init-connect ='SET NAMES utf8'
init_connect ='SET collation_connection = utf8_bin'
character-set-server = utf8
max_allowed_packet = 64M
skip-character-set-client-handshake

#cache
query_cache_size = 268435456
query_cache_type = 1
query_cache_limit = 1048576

As a follow-up to the suggestions listed below, I began investigating the file size of the table in question. I ran a script that writes the combined byte size of the three BLOB fields to a table called pen_sizes. Here is the result of getting the largest byte sizes:

select pen_size as bytes,
       pen_size / 1024 / 1024 as mb,
       pen_id
from pen_sizes
group by pen_id
order by bytes desc
limit 40

+---------+------------+--------+
| bytes   | mb         | pen_id |
+---------+------------+--------+
| 3542620 | 3.37850571 | 84816 |
| 3379107 | 3.22256756 | 74796 |
| 3019237 | 2.87936878 | 569726 |
| 3019237 | 2.87936878 | 576506 |
| 3019237 | 2.87936878 | 576507 |
| 2703177 | 2.57795048 | 346965 |
| 2703177 | 2.57795048 | 346964 |
| 2703177 | 2.57795048 | 93706 |
| 2064807 | 1.96915340 | 154627 |
| 2048592 | 1.95368958 | 237514 |
| 2000695 | 1.90801144 | 46798 |
| 1843034 | 1.75765419 | 231988 |
| 1843024 | 1.75764465 | 230423 |
| 1820514 | 1.73617744 | 76745 |
| 1795494 | 1.71231651 | 650208 |
| 1785353 | 1.70264530 | 74912 |
| 1754059 | 1.67280102 | 444932 |
| 1752609 | 1.67141819 | 76607 |
| 1711492 | 1.63220596 | 224574 |
| 1632405 | 1.55678272 | 76188 |
| 1500157 | 1.43066120 | 77256 |
| 1494572 | 1.42533493 | 137184 |
| 1478692 | 1.41019058 | 238547 |
| 1456973 | 1.38947773 | 181379 |
| 1433240 | 1.36684418 | 77631 |
| 1421452 | 1.35560226 | 102930 |
| 1383872 | 1.31976318 | 77627 |
| 1359317 | 1.29634571 | 454109 |
| 1355701 | 1.29289722 | 631811 |
| 1343621 | 1.28137684 | 75256 |
| 1343621 | 1.28137684 | 75257 |
| 1334071 | 1.27226925 | 77626 |
| 1327063 | 1.26558590 | 129731 |
| 1320627 | 1.25944805 | 636914 |
| 1231918 | 1.17484856 | 117269 |
| 1223975 | 1.16727352 | 75103 |
| 1220233 | 1.16370487 | 326462 |
| 1220233 | 1.16370487 | 326463 |
| 1203432 | 1.14768219 | 183967 |
| 1200373 | 1.14476490 | 420360 |
+---------+------------+--------+

This leads me to believe the average row size is closer to 1 MB than the suggested 10. Maybe the table size I listed earlier includes the indexes, too?
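The rule of thumb from the error message (combined log size at least 10× the largest BLOB row) can be checked against the numbers above. A quick sketch using the top result from the pen_sizes query:

```python
# Check the "combined log size >= 10x the largest BLOB row" rule of thumb.
largest_row_bytes = 3542620          # top result from the pen_sizes query
combined_log_bytes = 50331648 * 2    # innodb_log_file_size x innodb_log_files_in_group

required = largest_row_bytes * 10
print(required / 1024 / 1024)           # ~33.8 MB needed by the rule
print(combined_log_bytes >= required)   # True: the existing 96 MB clears the bar
```

By this rule the current 96 MB of logs is already sufficient, which hints that the hang came from the total volume of redo generated during the ALTER rather than from any single oversized row.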

I ran:

SELECT table_name AS "Tables", 
round(((data_length + index_length) / 1024 / 1024), 2) "Size in MB"
FROM information_schema.TABLES
WHERE table_schema = 'codepen'

+-------------------+------------+
| Tables            | Size in MB |
+-------------------+------------+
...snip
| pens              | 6287.89    |
...snip

Best Answer

0. Preliminary info

Your settings:

innodb_log_file_size = 50331648
innodb_log_files_in_group = 2

So your "log group capacity" = 2 × 50331648 = 96 MB.

1. How to determine the largest row

There is no direct way. But the size of any given row can easily be calculated based on these tables (compression should not matter to us here if, as I assume, rows are not compressed in the log files).
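For the on-page part of that calculation, the row format matters a great deal for BLOB columns. A rough, illustrative sketch of the inline footprint per format (this is a simplification of the real rules: it ignores record headers, NULL bitmaps, and the fact that InnoDB decides off-page storage per row based on whether the row fits in the page):

```python
# Rough inline footprint of one BLOB column in the clustered-index record.
# Illustrative only; real InnoDB decides off-page storage per row.
def inline_blob_bytes(blob_len, row_format="COMPACT"):
    if row_format in ("REDUNDANT", "COMPACT"):
        # Antelope formats: up to a 768-byte prefix stays in the row,
        # plus a 20-byte pointer to the off-page portion when longer.
        return min(blob_len, 768) + (20 if blob_len > 768 else 0)
    if row_format in ("DYNAMIC", "COMPRESSED"):
        # Barracuda formats: long values go fully off-page behind a
        # 20-byte pointer (short values can still stay inline).
        return 20 if blob_len > 768 else blob_len
    raise ValueError(f"unknown row format: {row_format}")

print(inline_blob_bytes(3542620, "COMPACT"))  # 788 (768 prefix + 20 pointer)
print(inline_blob_bytes(3542620, "DYNAMIC"))  # 20  (pointer only)
```

The full BLOB contents still hit the redo log when written, so the off-page savings help page layout more than they help the checkpoint-age problem above.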

2. The impact of innodb_log_file_size

From the Reference Manual:

The larger the value, the less checkpoint flush activity is needed in the buffer pool, saving disk I/O. Larger log files also make crash recovery slower, although improvements to recovery performance in MySQL 5.5 and higher make the log file size less of a consideration.
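A common way to balance those two concerns (popularized by the MySQL performance blog post the quoted answer links to) is to measure how much redo the server actually writes per minute, via the "Log sequence number" in SHOW ENGINE INNODB STATUS sampled twice, and size the combined logs to hold roughly an hour of writes. A sketch with hypothetical sampled LSN values:

```python
# Size redo logs from the measured write rate. The LSN values below are
# hypothetical samples taken 60 seconds apart from
# SHOW ENGINE INNODB STATUS -> "Log sequence number".
lsn_start = 3_836_410_803   # hypothetical sample at t = 0 s
lsn_end   = 3_838_334_638   # hypothetical sample at t = 60 s

bytes_per_minute = lsn_end - lsn_start
target_bytes = bytes_per_minute * 60   # hold ~1 hour of redo
per_file = target_bytes / 2            # split across innodb_log_files_in_group = 2

print(bytes_per_minute / 1024 / 1024)  # ~1.8 MB of redo per minute
print(per_file / 1024 / 1024)          # ~55 MB per log file
```

Sampling during a heavy ALTER TABLE rather than during idle traffic would give a much larger, more honest target for this workload.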

3. What else to worry about

6169.8 GB / 650k rows = roughly 10 MB per row on average. That is a serious problem in itself if you intend to use the database in a transactional, multi-user situation. Consider storing your BLOBs as files outside the database, or at least storing them in a separate MyISAM (non-transactional) table.
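For reference, the arithmetic behind that estimate, run against both size figures that appear in this thread (the 6169.8 GB from the question and the ~6.3 GB the information_schema query later reported):

```python
# Average row size implied by the two table-size figures in the thread.
rows = 650_000

size_gb = 6169.8                 # figure quoted in the question
avg_mb = size_gb * 1024 / rows
print(round(avg_mb, 1))          # ~9.7 MB per row

size_mb = 6287.89                # information_schema figure (data + indexes)
avg_kb = size_mb * 1024 / rows
print(round(avg_kb, 1))          # ~9.9 KB per row
```

The three-orders-of-magnitude gap between the two averages suggests one of the figures is mislabeled; the measured per-row BLOB sizes above (max ~3.4 MB) sit between them, closer to the information_schema number.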

Regarding "mysql - setting the correct innodb_log_file_size in MySQL", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/18806377/
