sql - Slow query when using group by and order by together

I have a query whose results I need to sort by a column. If I order by id, it runs very fast (2.8 ms). However, if I try to sort by any other column, even an indexed one, execution time jumps to about 800 ms. In EXPLAIN I can see that ordering by id uses an index scan, while ordering by reg_date results in a sequential scan.

Here are my indexes. I have also reindexed the table.

+--------------------+--------------------------------------------------------------------------+
| indexname          | indexdef                                                                 |
+--------------------+--------------------------------------------------------------------------+
| pk_users           | CREATE UNIQUE INDEX pk_users ON public.users USING btree (id)           |
| idx_users_reg_date | CREATE INDEX idx_users_end_date ON public.users USING btree (reg_date)  |
+--------------------+--------------------------------------------------------------------------+
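(For reference, the post does not show how this listing was produced, but a query along these lines against the pg_indexes catalog view would return it; the schema and table names here are taken from the indexdef column above.)

select indexname, indexdef
from pg_indexes
where schemaname = 'public'
and tablename = 'users';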

If I order by id, execution time is 2.601 ms:

select
users.id,
users.full_name,
sum(user_comments.badges) as badges,
count(user_comments) as comment_count
from
users
left join user_comments
on users.id = user_comments.user_id
group by users.id
order by users.id
limit 10

But if I order by the users.reg_date column (which has an index), it takes about 818.336 ms:

select
users.id,
users.full_name,
sum(user_comments.badges) as badges,
count(user_comments) as comment_count
from
users
left join user_comments
on users.id = user_comments.user_id
group by users.id
order by users.reg_date
limit 10;
QUERY PLAN
Limit (cost=73954.85..73954.88 rows=10 width=328) (actual time=614.913..614.914 rows=10 loops=1)
  Buffers: shared hit=9 read=25307, temp read=6671 written=6671
  ->  Sort (cost=73954.85..74216.20 rows=104539 width=328) (actual time=614.912..614.912 rows=10 loops=1)
        Sort Key: users.reg_date
        Sort Method: top-N heapsort Memory: 25kB
        Buffers: shared hit=9 read=25307, temp read=6671 written=6671
        ->  GroupAggregate (cost=67941.35..71695.80 rows=104539 width=328) (actual time=432.031..598.345 rows=104539 loops=1)
              Buffers: shared hit=6 read=25307, temp read=6671 written=6671
              ->  Merge Left Join (cost=67941.35..69866.37 rows=104539 width=328) (actual time=432.019..535.760 rows=161688 loops=1)
                    Merge Cond: (users.id = user_comments.user_id)
                    Buffers: shared hit=6 read=25307, temp read=6671 written=6671
                    ->  Sort (cost=33360.14..33621.49 rows=104539 width=8) (actual time=267.480..292.054 rows=104539 loops=1)
                          Sort Key: users.id
                          Sort Method: external merge Disk: 1408kB
                          Buffers: shared hit=4 read=22164, temp read=181 written=181
                          ->  Seq Scan on users (cost=0.00..23213.39 rows=104539 width=8) (actual time=0.012..202.277 rows=104539 loops=1)
                                Buffers: shared hit=4 read=22164
                    ->  Materialize (cost=34581.21..34981.87 rows=80133 width=324) (actual time=164.533..205.544 rows=80155 loops=1)
                          Buffers: shared hit=2 read=3143, temp read=6490 written=6490
                          ->  Sort (cost=34581.21..34781.54 rows=80133 width=324) (actual time=164.525..193.679 rows=80155 loops=1)
                                Sort Key: user_comments.user_id
                                Sort Method: external merge Disk: 24048kB
                                Buffers: shared hit=2 read=3143, temp read=6490 written=6490
                                ->  Seq Scan on user_comments (cost=0.00..3946.33 rows=80133 width=324) (actual time=0.028..48.802 rows=80155 loops=1)
                                      Buffers: shared hit=2 read=3143
Total runtime: 619.567 ms

Best Answer

As mentioned in one of the comments, there is a sort spilling to disk: "Sort Method: external merge Disk: 24048kB".

This should be avoided whenever possible, so if you have enough memory, increase work_mem. The default of 4MB is fairly small.

Keep in mind that if you set work_mem very high and many queries run at the same time, you can exhaust the system's memory.
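If that is a concern, work_mem can be raised only for the session or transaction that runs this report instead of globally. A minimal sketch follows; the 64MB value is just an illustration, sized so that the 24MB on-disk sort seen above fits in memory:

show work_mem;                 -- check the current value (default is 4MB)

set work_mem = '64MB';         -- applies to the current session only

begin;
set local work_mem = '64MB';   -- or restrict it to a single transaction
-- run the report query here
commit;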

To see temporary file usage in the log file, you should also set log_temp_files = 0.
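For example, assuming PostgreSQL 9.4 or later and sufficient privileges (on older versions, edit postgresql.conf directly), the setting can be applied and reloaded without a restart:

alter system set log_temp_files = 0;  -- 0 means log every temporary file, regardless of size
select pg_reload_conf();              -- reload the configuration so the change takes effect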

Regarding "sql - Slow query when using group by and order by together", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/55806699/
