MySQL composite indexes

Reposted. Author: 行者123. Updated: 2023-11-29 02:22:45

We use MySQL as our database.

The queries below run against a MySQL table with roughly 25 million rows. They run far too slowly, and I'd like to know whether better composite indexes could improve the situation.

What would the best composite indexes be? Please also advise whether these queries need composite indexes at all.

First query

EXPLAIN SELECT log_type,
       count(DISTINCT subscriber_id) AS distinct_count,
       count(*) AS total_count
FROM stats.campaign_logs
WHERE domain = 'xxx'
  AND campaign_id = '12345'
  AND log_type IN ('EMAIL_SENT', 'EMAIL_CLICKED', 'EMAIL_OPENED', 'UNSUBSCRIBED')
  AND log_time BETWEEN CONVERT_TZ('2015-02-12 00:00:00','+05:30','+00:00')
                   AND CONVERT_TZ('2015-02-19 23:59:58','+05:30','+00:00')
GROUP BY log_type

EXPLAIN output for the above query:

+----+-------------+---------------+-------------+--------------------------------------------------------------+--------------------------------+---------+------+-------+------------------------------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+-------------+--------------------------------------------------------------+--------------------------------+---------+------+-------+------------------------------------------------------------------------------+
| 1 | SIMPLE | campaign_logs | index_merge | campaign_id_index,domain_index,log_type_index,log_time_index | campaign_id_index,domain_index | 153,153 | NULL | 35683 | Using intersect(campaign_id_index,domain_index); Using where; Using filesort |
+----+-------------+---------------+-------------+--------------------------------------------------------------+--------------------------------+---------+------+-------+------------------------------------------------------------------------------+

Second query

SELECT campaign_id
, subscriber_id
, campaign_name
, log_time
, log_type
, message
, UNIX_TIMESTAMP(log_time) AS time
FROM campaign_logs
WHERE domain = 'xxx'
AND log_type = 'EMAIL_OPENED'
ORDER BY log_time DESC
LIMIT 20;

EXPLAIN output for the above query:

+----+-------------+---------------+-------------+-----------------------------+-----------------------------+---------+------+--------+---------------------------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+-------------+-----------------------------+-----------------------------+---------+------+--------+---------------------------------------------------------------------------+
| 1 | SIMPLE | campaign_logs | index_merge | domain_index,log_type_index | domain_index,log_type_index | 153,153 | NULL | 118392 | Using intersect(domain_index,log_type_index); Using where; Using filesort |
+----+-------------+---------------+-------------+-----------------------------+-----------------------------+---------+------+--------+---------------------------------------------------------------------------+

Third query

EXPLAIN SELECT *, UNIX_TIMESTAMP(log_time) AS time
FROM stats.campaign_logs
WHERE domain = 'xxx'
  AND log_type <> 'EMAIL_SLEEP'
  AND subscriber_id = '123'
ORDER BY log_time DESC
LIMIT 100

EXPLAIN output for the above query:

+----+-------------+---------------+------+-------------------------------------------------+---------------------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+------+-------------------------------------------------+---------------------+---------+-------+------+-----------------------------+
| 1 | SIMPLE | campaign_logs | ref | subscriber_id_index,domain_index,log_type_index | subscriber_id_index | 153 | const | 35 | Using where; Using filesort |
+----+-------------+---------------+------+-------------------------------------------------+---------------------+---------+-------+------+-----------------------------+
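One candidate composite index for this third query (a sketch, untested; the index name and column order are my assumption: the equality filters `subscriber_id` and `domain` first, then `log_time` to serve the ORDER BY, while the `log_type <> 'EMAIL_SLEEP'` inequality would still be checked per row):

```sql
-- Hypothetical index: equality columns first, then the sort column
ALTER TABLE stats.campaign_logs
  ADD INDEX `subid_domain_logtime_index` (`subscriber_id`, `domain`, `log_time`);
```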

If you need any other details, I can provide them here.

Update (April 22, 2016): We now want to add one more column, node_id, to the existing table. A campaign can have multiple nodes, and for every report we currently generate per campaign, we now also need the same report per node.

For example:

SELECT log_type,
       count(DISTINCT subscriber_id) AS distinct_count,
       count(*) AS total_count
FROM stats.campaign_logs
WHERE domain = 'xxx'
  AND campaign_id = '12345'
  AND node_id = '34567'
  AND log_type IN ('EMAIL_SENT', 'EMAIL_CLICKED', 'EMAIL_OPENED', 'UNSUBSCRIBED')
  AND log_time BETWEEN CONVERT_TZ('2015-02-12 00:00:00','+05:30','+00:00')
                   AND CONVERT_TZ('2015-02-19 23:59:58','+05:30','+00:00')
GROUP BY log_type

CREATE TABLE `camp_logs` (
  `domain` varchar(50) DEFAULT NULL,
  `campaign_id` varchar(50) DEFAULT NULL,
  `subscriber_id` varchar(50) DEFAULT NULL,
  `message` varchar(21000) DEFAULT NULL,
  `log_time` datetime DEFAULT NULL,
  `log_type` varchar(50) DEFAULT NULL,
  `level` varchar(50) DEFAULT NULL,
  `campaign_name` varchar(500) DEFAULT NULL,
  KEY `subscriber_id_index` (`subscriber_id`),
  KEY `log_type_index` (`log_type`),
  KEY `log_time_index` (`log_time`),
  KEY `campid_domain_logtype_logtime_subid_index` (`campaign_id`,`domain`,`log_type`,`log_time`,`subscriber_id`),
  KEY `domain_logtype_logtime_index` (`domain`,`log_type`,`log_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Size problem

Because we have two composite indexes, the index file is growing quickly. Current table statistics:

Data size: 30 GB
Index size: 35 GB
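If a per-index size breakdown would help with this decision, InnoDB's persistent statistics expose one (assuming MySQL 5.6+ with persistent stats enabled; `stats` and `campaign_logs` are the schema and table names used in the queries above):

```sql
-- Approximate on-disk size of each index, in GB (stat_value is in pages)
SELECT index_name,
       ROUND(stat_value * @@innodb_page_size / 1024 / 1024 / 1024, 2) AS size_gb
FROM mysql.innodb_index_stats
WHERE database_name = 'stats'
  AND table_name = 'campaign_logs'
  AND stat_name = 'size';
```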

For the node_id reports, we want to change our existing composite index

from

KEY `campid_domain_logtype_logtime_subid_index` (`campaign_id`,`domain`,`log_type`,`log_time`,`subscriber_id`)

to

KEY `campid_domain_logtype_logtime_subid_nodeid_index` (`campaign_id`,`domain`,`log_type`,`log_time`,`subscriber_id`,`node_id`)
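Applied as DDL, that swap might look like the sketch below (assuming the `node_id` column type mirrors the other id columns; on a ~25-million-row InnoDB table this rebuild is expensive and would typically be done with an online schema-change approach):

```sql
-- Hypothetical migration: add the column, then replace the composite index
ALTER TABLE camp_logs
  ADD COLUMN `node_id` varchar(50) DEFAULT NULL;

ALTER TABLE camp_logs
  DROP INDEX `campid_domain_logtype_logtime_subid_index`,
  ADD INDEX `campid_domain_logtype_logtime_subid_nodeid_index`
      (`campaign_id`,`domain`,`log_type`,`log_time`,`subscriber_id`,`node_id`);
```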

Could you suggest suitable composite indexes for campaign-level and node-level reporting?

Thanks

Best answer

Here is your first query:

SELECT A.log_type, count(*) AS distinct_count, sum(A.total_count) AS total_count
FROM (SELECT log_type, count(subscriber_id) AS total_count
      FROM stats.campaign_logs
      WHERE domain = 'xxx' AND campaign_id = '12345' AND
            log_type IN ('EMAIL_SENT', 'EMAIL_CLICKED', 'EMAIL_OPENED', 'UNSUBSCRIBED') AND
            DATE(CONVERT_TZ(log_time,'+00:00','+05:30')) BETWEEN DATE('2015-02-12 00:00:00') AND DATE('2015-02-19 23:59:58')
      GROUP BY subscriber_id, log_type) A
GROUP BY A.log_type;

This would be better written as:

SELECT log_type, count(DISTINCT subscriber_id) AS total_count
FROM stats.campaign_logs
WHERE domain = 'xxx' AND campaign_id = '12345' AND
      log_type IN ('EMAIL_SENT', 'EMAIL_CLICKED', 'EMAIL_OPENED', 'UNSUBSCRIBED') AND
      DATE(CONVERT_TZ(log_time, '+00:00', '+05:30')) BETWEEN DATE('2015-02-12 00:00:00') AND DATE('2015-02-19 23:59:58')
GROUP BY log_type;

The best index is probably campaign_logs(domain, campaign_id, log_type, log_time, subscriber_id). This is a covering index for the query; the first three columns should be used for the WHERE filtering.
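In DDL form, that suggestion could be created along these lines (a sketch; the index name here is made up):

```sql
ALTER TABLE stats.campaign_logs
  ADD INDEX `domain_campid_logtype_logtime_subid_index`
      (`domain`,`campaign_id`,`log_type`,`log_time`,`subscriber_id`);
```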

Regarding MySQL composite indexes, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/28632173/
