
MySQL with more than 100000000 rows always crashes with page corruption


Everyone: my English is not good, so I need some help with my problem.

Environment: CentOS Linux release 7.2.1511 (Core); mysqld version 5.7.19 or 5.7.16; file system: xfs

I am inserting data into a table, and when the table exceeds 100000000 rows, mysqld crashes with page corruption.

The error log is as follows:

10:23:15 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.

key_buffer_size=33554432
read_buffer_size=4194304
max_used_connections=4
max_threads=600
thread_count=4
connection_count=2

It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 3727174 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7f8f5c0ec8e0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7f8fa8086ea8 thread_stack 0x80000
/usr/local/mysql/bin/mysqld(my_print_stacktrace+0x35)[0xf45e05]
/usr/local/mysql/bin/mysqld(handle_fatal_signal+0x4a4)[0x7cd464]
/lib64/libpthread.so.0(+0xf100)[0x7f90914fd100]
/usr/local/mysql/bin/mysqld(_Z20rec_get_offsets_funcPKhPK12dict_index_tPmmPP16mem_block_info_t+0x21)[0x10dc551]
/usr/local/mysql/bin/mysqld[0x11001d8]
/usr/local/mysql/bin/mysqld(_Z23row_merge_build_indexesP5trx_tP12dict_table_tS2_bPP12dict_index_tPKmmP5TABLEPK8dtuple_tS7_mR13ib_sequence_tbP16ut_stage_alter_tPK16dict_add_v_col_tS9_+0x4bd)[0x1102f8d]
/usr/local/mysql/bin/mysqld(_ZN11ha_innobase19inplace_alter_tableEP5TABLEP18Alter_inplace_info+0x304)[0x10595f4]
/usr/local/mysql/bin/mysqld[0xd7e50c]
/usr/local/mysql/bin/mysqld(_Z17mysql_alter_tableP3THDPKcS2_P24st_ha_create_informationP10TABLE_LISTP10Alter_info+0x39de)[0xd8242e]
/usr/local/mysql/bin/mysqld(_Z21mysql_execute_commandP3THDb+0xef5)[0xd13d15]
/usr/local/mysql/bin/mysqld(_Z11mysql_parseP3THDP12Parser_state+0x3a5)[0xd18245]
/usr/local/mysql/bin/mysqld(_Z16dispatch_commandP3THDPK8COM_DATA19enum_server_command+0x11af)[0xd1945f]
/usr/local/mysql/bin/mysqld(_Z10do_commandP3THD+0x194)[0xd1a324]
/usr/local/mysql/bin/mysqld(handle_connection+0x29c)[0xdea0fc]
/usr/local/mysql/bin/mysqld(pfs_spawn_thread+0x174)[0xfbdbf4]
/lib64/libpthread.so.0(+0x7dc5)[0x7f90914f5dc5]
/lib64/libc.so.6(clone+0x6d)[0x7f908ffb221d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7f8f5c0008f0): is an invalid pointer
Connection ID (thread ID): 1014
Status: NOT_KILLED
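
Since the server reports page corruption, one offline check (a sketch, not something from the original post) is to run the innochecksum utility shipped with MySQL 5.7 against the table's .ibd file while mysqld is stopped; the install prefix, datadir, and schema name below are assumptions.

# Assumption: binaries live under /usr/local/mysql (as in the backtrace), the datadir is
# /usr/local/mysql/data, and the table is in a schema named `test`. Stop the server first
# (the service name may differ on your system); innochecksum needs exclusive access.
systemctl stop mysqld
/usr/local/mysql/bin/innochecksum /usr/local/mysql/data/test/sbtest1.ibd \
    && echo "all page checksums OK" \
    || echo "checksum mismatch found (non-zero exit status)"
systemctl start mysqld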

When I bring the instance back up with innodb_force_recovery = 1, I can run

select * from sbtest1 limit 100;
select * from sbtest1 order by id desc limit 100;

But when I run select count(1) from sbtest1 or select * from sbtest1, or dump the data with mysqldump, the instance crashes again with a page corruption error.
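
Since bounded reads by primary key work while full scans hit the bad pages, one possible salvage approach (a sketch under assumptions; the schema name `test`, the chunk size, and the id range are all placeholders) is to dump the table in id ranges with mysqldump --where and skip whichever range fails:

# Dump the schema once, then the rows in 10-million-id chunks; add credentials as needed.
mysqldump --no-data test sbtest1 > sbtest1_schema.sql
for start in $(seq 1 10000000 100000001); do
    end=$((start + 9999999))
    mysqldump --no-create-info --where="id BETWEEN ${start} AND ${end}" \
        test sbtest1 > sbtest1_rows_${start}.sql \
        || echo "range ${start}-${end} failed, probably the corrupted pages"
done

If the server crashes on one chunk, restart it with innodb_force_recovery still set, note that range, and narrow it down with smaller chunks; the files that succeeded can then be reloaded into a fresh table.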

My table structures:

CREATE TABLE `login_log_1` (
`id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '',
`user_id` bigint(20) NOT NULL COMMENT '',
`login_ip` varchar(100) DEFAULT NULL COMMENT '',
`operation_type` tinyint(4) DEFAULT '6' COMMENT ' ',
`type` tinyint(4) DEFAULT NULL COMMENT '',
`status` int(10) DEFAULT '1' COMMENT ':',
`interface_name` varchar(100) DEFAULT NULL COMMENT '',
`interface_code` varchar(100) DEFAULT NULL COMMENT '',
`take_time` bigint(20) DEFAULT NULL COMMENT '',
`version_code` int(10) DEFAULT NULL COMMENT '',
`remark` varchar(128) DEFAULT NULL COMMENT '',
`login_time` datetime DEFAULT NULL COMMENT '',
`device_id` varchar(64) DEFAULT '' COMMENT '',
PRIMARY KEY (`id`,`user_id`),
KEY `IDX_USERID` (`user_id`) USING BTREE,
KEY `IDX_LOGIN_TIME` (`login_time`)
) ENGINE=InnoDB AUTO_INCREMENT=4468000 DEFAULT CHARSET=utf8
COMMENT='loginlog'

CREATE TABLE `sbtest1` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`k` int(10) unsigned NOT NULL DEFAULT '0',
`c` char(120) NOT NULL DEFAULT '',
`pad` char(60) NOT NULL DEFAULT '',
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=100000001 DEFAULT CHARSET=utf8mb4 MAX_ROWS=1000000

Can anyone help me? Thank you very much!

Best Answer

This link http://docs.oracle.com/cd/E37670_01/E37355/html/ol_quoset_xfs.html may help you rule out the xfs QUOTA limit as one possible cause.
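
To check that quota angle concretely, a quick sketch (the mount point /data is an assumption; use whatever filesystem holds your datadir) is to ask xfs_quota whether any user or project quota on the MySQL data filesystem is at its limit:

# Assumption: the MySQL datadir sits on the xfs filesystem mounted at /data.
xfs_quota -x -c 'report -h' /data    # usage vs. soft/hard limits, per user
df -h /data && df -i /data           # free space and free inodes on the same mount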

max_threads = 600 seems to be far more than is needed to support a max_used_connections of 4.
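
To see how far the configured ceiling is from what the workload actually uses (the max_threads figure in the crash report generally tracks max_connections, which feeds the worst-case memory estimate shown there), a hedged check via the mysql client could look like this; the value 200 is only an illustration:

# Compare the configured limit with the observed peak; add credentials as needed.
mysql -e "SHOW GLOBAL VARIABLES LIKE 'max_connections'; SHOW GLOBAL STATUS LIKE 'Max_used_connections';"

# If the peak really is around 4, a much lower ceiling is plenty; 200 is just an example.
mysql -e "SET GLOBAL max_connections = 200;"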

pfs_spawn_thread is one of the last items recorded in the error log.

Regarding MySQL with more than 100000000 rows always crashing with page corruption, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/45808337/
