
mysql - SELECT COUNT with JOIN optimization for tables with > 100M rows


I have the following query:

SELECT SUBSTRING(a0_.created_date FROM 1 FOR 10) AS sclr_0, 
COUNT(1) AS sclr_1
FROM applications a0_ INNER JOIN
package_codes p1_ ON a0_.id = p1_.application_id
WHERE a0_.created_date BETWEEN '2019-01-01' AND '2020-01-01' AND
p1_.type = 'Package 1'
GROUP BY sclr_0

--- EDIT ---

Most of you have focused on the GROUP BY and SUBSTRING, but they are not the source of the problem.

The following query has the same execution time:

SELECT COUNT(1) AS sclr_1 
FROM applications a0_ INNER JOIN
package_codes p1_ ON a0_.id = p1_.application_id
WHERE a0_.created_date BETWEEN '2019-01-01' AND '2020-01-01' AND
p1_.type = 'Package 1'
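
For reference, the execution plan of this stripped-down query can be inspected with EXPLAIN (a minimal sketch; the actual output depends on the data and server):

EXPLAIN SELECT COUNT(1) AS sclr_1
FROM applications a0_ INNER JOIN
package_codes p1_ ON a0_.id = p1_.application_id
WHERE a0_.created_date BETWEEN '2019-01-01' AND '2020-01-01' AND
p1_.type = 'Package 1'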

--- EDIT 2 ---

After adding an index on applications.created_date and forcing the query to use the specified indexes, as suggested by @DDS, the execution time dropped to ~750 ms.

The current query looks like this:

SELECT SUBSTRING(a0_.created_date FROM 1 FOR 10) AS sclr_0, 
COUNT(1) AS sclr_1
FROM applications a0_ USE INDEX (applications_created_date_idx) INNER JOIN
package_codes p1_ USE INDEX (PRIMARY, UNIQ_70A9C6AA3E030ACD, package_codes_type_idx) ON a0_.id = p1_.application_id
WHERE a0_.created_date BETWEEN '2019-01-01' AND '2020-01-01' AND
p1_.type = 'Package 1'
GROUP BY sclr_0

--- EDIT 3 ---

I found that using too many index hints in a query can, in some cases, cause MySQL to pick a non-optimal index, so the final query should look like this:

SELECT SUBSTRING(a0_.created_date FROM 1 FOR 10) AS sclr_0, 
COUNT(1) AS sclr_1
FROM applications a0_ USE INDEX (applications_created_date_idx) INNER JOIN
package_codes p1_ USE INDEX (package_codes_application_idx) ON a0_.id = p1_.application_id
WHERE a0_.created_date BETWEEN '2019-01-01' AND '2020-01-01' AND
p1_.type = 'Package 1'
GROUP BY sclr_0
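
To double-check which index the optimizer actually settles on with these hints, the plan can be inspected with EXPLAIN (a sketch; the key column of the output shows the chosen index per table):

EXPLAIN SELECT SUBSTRING(a0_.created_date FROM 1 FOR 10) AS sclr_0,
COUNT(1) AS sclr_1
FROM applications a0_ USE INDEX (applications_created_date_idx) INNER JOIN
package_codes p1_ USE INDEX (package_codes_application_idx) ON a0_.id = p1_.application_id
WHERE a0_.created_date BETWEEN '2019-01-01' AND '2020-01-01' AND
p1_.type = 'Package 1'
GROUP BY sclr_0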

--- END EDIT ---

package_codes contains over 100,000,000 records.

applications contains over 250,000 records.

The query takes about 2 minutes to return a result. Is there any way to optimize it? I am stuck on MySQL 5.5.

Tables:

CREATE TABLE `applications` (
`id` int(11) NOT NULL,
`created_date` datetime NOT NULL,
`name` varchar(64) COLLATE utf8mb4_unicode_ci NOT NULL,
`surname` varchar(64) COLLATE utf8mb4_unicode_ci NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

ALTER TABLE `applications`
ADD PRIMARY KEY (`id`),
ADD KEY `applications_created_date_idx` (`created_date`);

ALTER TABLE `applications`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;
CREATE TABLE `package_codes` (
`id` int(11) NOT NULL,
`application_id` int(11) DEFAULT NULL,
`created_date` datetime NOT NULL,
`type` varchar(50) COLLATE utf8mb4_unicode_ci NOT NULL,
`code` varchar(50) COLLATE utf8mb4_unicode_ci NOT NULL,
`disabled` tinyint(1) NOT NULL DEFAULT '0',
`meta_data` longtext COLLATE utf8mb4_unicode_ci
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

ALTER TABLE `package_codes`
ADD PRIMARY KEY (`id`),
ADD UNIQUE KEY `UNIQ_70A9C6AA3E030ACD` (`application_id`),
ADD KEY `package_codes_code_idx` (`code`),
ADD KEY `package_codes_type_idx` (`type`),
ADD KEY `package_codes_application_idx` (`application_id`),
ADD KEY `package_codes_code_application_idx` (`code`,`application_id`);

ALTER TABLE `package_codes`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;

ALTER TABLE `package_codes`
ADD CONSTRAINT `FK_70A9C6AA3E030ACD` FOREIGN KEY (`application_id`) REFERENCES `applications` (`id`);

Best Answer

My suggestion is to avoid this:

SELECT SUBSTRING(a0_.created_date FROM 1 FOR 10) AS sclr_0, 
[...]
GROUP BY sclr_0

because the DBMS has to "recompute" that expression for every row and cannot use an index on it. If you put this value in its own column and create an index on it, your performance should improve.
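
A minimal sketch of that idea on MySQL 5.5, assuming a new (hypothetical) column named created_day that the application or a trigger keeps in sync, since 5.5 has no generated columns:

-- add a plain DATE column, backfill it from the existing datetime, and index it
ALTER TABLE applications ADD COLUMN created_day DATE NULL;
UPDATE applications SET created_day = DATE(created_date);
CREATE INDEX applications_created_day_idx ON applications (created_day);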

Or, at least, use a date function such as DATE(), so that MySQL may be able to make use of its indexes (obviously you should add an index on applications.created_date):

SELECT SUBSTRING(a0_.created_date FROM 1 FOR 10) AS sclr_0, COUNT(1) AS sclr_1
FROM applications a0_ FORCE INDEX (date_index) INNER JOIN
package_codes p1_ FORCE INDEX (type_index) ON (a0_.id = p1_.application_id AND a0_.created_date
BETWEEN '2019-01-01' AND '2020-01-01' AND p1_.type = 'Package 1')
GROUP BY DATE(a0_.created_date)

Another optimization is to "push" the conditions into the ON clause, so that MySQL "filters" the data before joining and therefore performs the join over fewer rows.

EDIT: here is how to create the index on the date column:

CREATE INDEX date_index ON applications(created_date);

If there are many more distinct types than distinct dates (i.e. type is more selective), you should consider putting the index on type instead:

CREATE INDEX type_index ON package_codes(type);

[EDIT 2] Please post the result of:

SELECT COUNT(DISTINCT DATE(a0_.created_date)) AS N_DATES, COUNT(DISTINCT type) AS N_TYPES
FROM applications a0_ INNER JOIN
package_codes p1_ ON a0_.id = p1_.application_id

just to get an idea of which index would be more selective.

Useful link: index optimization with MySQL.

Regarding "mysql - SELECT COUNT with JOIN optimization for tables with > 100M rows", see the corresponding question on Stack Overflow: https://stackoverflow.com/questions/54530340/
