
mysql - How to improve a query with a long "Sending data" time


The query in question is:

select count(*)
from t_fault tf
where err_status = 1 and
report_type = 2 and
solve_status = 2 and
fault_code = 8 and
tf.record_time between '2018-01-12 00:00:00' and '2018-01-18 23:59:59';

The profiling data for this query is:

+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.000070 |
| checking permissions | 0.000005 |
| Opening tables       | 0.000014 |
| init                 | 0.000021 |
| System lock          | 0.000006 |
| optimizing           | 0.000011 |
| statistics           | 0.000080 |
| preparing            | 0.000017 |
| executing            | 0.000002 |
| Sending data         | 0.500267 |
| end                  | 0.000011 |
| query end            | 0.000006 |
| closing tables       | 0.000011 |
| freeing items        | 0.000086 |
| cleaning up          | 0.000012 |
+----------------------+----------+
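For reference, a per-stage profile like the one above is normally collected with MySQL's session profiling facility (SHOW PROFILE, deprecated in newer versions in favour of the Performance Schema). A minimal sketch, assuming profiling is available on this server:

SET profiling = 1;

select count(*)
from t_fault tf
where err_status = 1 and
report_type = 2 and
solve_status = 2 and
fault_code = 8 and
tf.record_time between '2018-01-12 00:00:00' and '2018-01-18 23:59:59';

-- list the profiled statements and their Query_ID values
SHOW PROFILES;

-- per-stage breakdown for one statement (1 = Query_ID from SHOW PROFILES)
SHOW PROFILE FOR QUERY 1;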

"Sending data" takes about 0.5 seconds, which I consider poor performance, and I have not been able to do any better.

Maybe the indexes are not right.

Below is the DDL of the t_fault table, including the primary key and indexes:

CREATE TABLE `t_fault` (
  `id` varchar(36) NOT NULL,
  `pile_id` varchar(19) DEFAULT NULL,
  `report_type` int(2) DEFAULT '0',
  `fault_code` int(2) DEFAULT NULL,
  `err_code` int(2) DEFAULT NULL,
  `err_status` int(2) DEFAULT NULL,
  `solve_status` int(2) DEFAULT NULL,
  `create_time` datetime DEFAULT NULL,
  `record_time` datetime DEFAULT NULL,
  `fault_type` int(8) DEFAULT NULL,
  `update_time` datetime DEFAULT NULL,
  `solve_time` datetime DEFAULT NULL,
  `operator_id` varchar(19) DEFAULT NULL,
  `inter_no` smallint(6) DEFAULT '0',
  PRIMARY KEY (`id`),
  KEY `i_fault_common` (`err_status`,`report_type`,`solve_status`,`fault_code`,`record_time`),
  KEY `i_fault_pile_common` (`pile_id`,`err_status`,`report_type`,`solve_status`,`fault_code`,`record_time`),
  KEY `i_fault_operator_common` (`operator_id`,`err_status`,`report_type`,`solve_status`,`fault_code`,`record_time`),
  KEY `i_fault_operator_pile_common` (`operator_id`,`pile_id`,`err_status`,`report_type`,`solve_status`,`fault_code`,`record_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

This table contains 8,944,637 rows. When I execute the following SQL:

explain select count(*) from t_fault tf where err_status = 1 and report_type = 2 and solve_status =2 and fault_code =8 and tf.record_time between '2018-01-12 00:00:00' and '2018-01-18 23:59:59';

MySQL prints this:

+----+-------------+-------+-------+----------------+----------------+---------+------+---------+--------------------------+
| id | select_type | table | type  | possible_keys  | key            | key_len | ref  | rows    | Extra                    |
+----+-------------+-------+-------+----------------+----------------+---------+------+---------+--------------------------+
|  1 | SIMPLE      | tf    | range | i_fault_common | i_fault_common | 26      | NULL | 1584048 | Using where; Using index |
+----+-------------+-------+-------+----------------+----------------+---------+------+---------+--------------------------+

So how can I use some tricks to speed up the "Sending data" stage of the query in question?

Best Answer

and this table contains 8,944,637 rows.

Your table is quite heavy. I would suggest looking at RANGE Partitioning.
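The answer only names the technique, so here is a minimal sketch of what RANGE partitioning on record_time might look like for this table. It is an assumption, not the answerer's DDL; note that MySQL requires every unique key, including the primary key, to contain the partitioning column, so the primary key is widened to (id, record_time) and record_time is made NOT NULL:

-- assumption: no existing rows have a NULL record_time
ALTER TABLE t_fault
  MODIFY `record_time` datetime NOT NULL,
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (`id`, `record_time`);

-- monthly partitions; a predicate on record_time lets MySQL prune to the matching partitions
ALTER TABLE t_fault
PARTITION BY RANGE (TO_DAYS(`record_time`)) (
  PARTITION p201712 VALUES LESS THAN (TO_DAYS('2018-01-01')),
  PARTITION p201801 VALUES LESS THAN (TO_DAYS('2018-02-01')),
  PARTITION p201802 VALUES LESS THAN (TO_DAYS('2018-03-01')),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);

Pruning only applies to predicates on the partitioning column, so it is the record_time BETWEEN condition in the original query that would allow MySQL to skip the other partitions.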

A similar question can be found on Stack Overflow: https://stackoverflow.com/questions/48334113/
