
postgresql - Slow Postgres 9.3 query


I'm trying to figure out whether I can speed up two queries on a database that stores email messages. Here is the table:

\d messages;
Table "public.messages"
Column | Type | Modifiers
----------------+---------+-------------------------------------------------------
id | bigint | not null default nextval('messages_id_seq'::regclass)
created | bigint |
updated | bigint |
version | bigint |
threadid | bigint |
userid | bigint |
groupid | bigint |
messageid | text |
date | bigint |
num | bigint |
hasattachments | boolean |
placeholder | boolean |
compressedmsg | bytea |
revcount | bigint |
subject | text |
isreply | boolean |
likes | bytea |
isspecial | boolean |
pollid | bigint |
username | text |
fullname | text |
Indexes:
"messages_pkey" PRIMARY KEY, btree (id)
"idx_unique_message_messageid" UNIQUE, btree (groupid, messageid)
"idx_unique_message_num" UNIQUE, btree (groupid, num)
"idx_group_id" btree (groupid)
"idx_message_id" btree (messageid)
"idx_thread_id" btree (threadid)
"idx_user_id" btree (userid)

Output of SELECT relname, relpages, reltuples::numeric, pg_size_pretty(pg_table_size(oid)) FROM pg_class WHERE oid='messages'::regclass;

 relname  | relpages | reltuples | pg_size_pretty
----------+----------+-----------+----------------
 messages |  1584913 |   7337880 | 32 GB

Some possibly relevant Postgres configuration values:

shared_buffers = 1536MB
effective_cache_size = 4608MB
work_mem = 7864kB
maintenance_work_mem = 384MB

Here is the EXPLAIN ANALYZE output for the first query:

explain analyze SELECT * FROM messages WHERE groupid=1886 ORDER BY id ASC LIMIT 20 offset 4440;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=479243.63..481402.39 rows=20 width=747) (actual time=14167.374..14167.408 rows=20 loops=1)
   ->  Index Scan using messages_pkey on messages  (cost=0.43..19589605.98 rows=181490 width=747) (actual time=14105.172..14167.188 rows=4460 loops=1)
         Filter: (groupid = 1886)
         Rows Removed by Filter: 2364949
 Total runtime: 14167.455 ms
(5 rows)

The second query:

explain analyze SELECT * FROM messages WHERE groupid=1886 ORDER BY created ASC LIMIT 20 offset 4440;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=538650.72..538650.77 rows=20 width=747) (actual time=671.983..671.992 rows=20 loops=1)
   ->  Sort  (cost=538639.62..539093.34 rows=181490 width=747) (actual time=670.680..671.829 rows=4460 loops=1)
         Sort Key: created
         Sort Method: top-N heapsort  Memory: 7078kB
         ->  Bitmap Heap Scan on messages  (cost=7299.11..526731.31 rows=181490 width=747) (actual time=84.975..512.969 rows=200561 loops=1)
               Recheck Cond: (groupid = 1886)
               ->  Bitmap Index Scan on idx_unique_message_num  (cost=0.00..7253.73 rows=181490 width=0) (actual time=57.239..57.239 rows=203423 loops=1)
                     Index Cond: (groupid = 1886)
 Total runtime: 672.787 ms
(9 rows)

This is on an SSD instance with 8 GB of RAM; the load average is typically around 0.15.

I'm definitely no expert here. Is this a case of the data being scattered all over the disk? Is CLUSTER my only solution?
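For reference, a one-time physical reordering with CLUSTER would look like the sketch below (reusing the existing idx_group_id index); note that CLUSTER rewrites the table under an exclusive lock, and the ordering is not maintained for rows inserted later:

-- Rewrite the table in groupid order, then refresh planner statistics.
CLUSTER messages USING idx_group_id;
ANALYZE messages;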

One thing I don't understand is why it uses idx_unique_message_num as the index for the second query. And why is ordering by id so slow?

Best Answer

If there are many rows with groupid = 1886 (from the comments: there are 200,563), then fetching rows at an OFFSET within that sorted subset requires sorting the entire subset first (or running an equivalent top-N heap algorithm), which is slow. That is what the second plan shows: idx_unique_message_num is usable because its leading column is groupid, so a bitmap scan collects all the matching rows and a top-N heapsort on created then extracts the 20 rows at offset 4440. The first query is slower still because no existing index yields one group's rows in id order, so the planner walks the entire primary key in id order, discarding 2,364,949 non-matching rows along the way.

This can be fixed by adding indexes: in this case, one on (groupid, id) and another on (groupid, created), as sketched below.
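A minimal sketch of those indexes (the index names are illustrative; CONCURRENTLY is optional, but it avoids blocking writes while building on a 32 GB table):

-- Composite indexes matching the WHERE clause plus each ORDER BY column.
CREATE INDEX CONCURRENTLY idx_messages_groupid_id ON messages (groupid, id);
CREATE INDEX CONCURRENTLY idx_messages_groupid_created ON messages (groupid, created);

With (groupid, id) in place, the first query can read only the index entries for groupid = 1886, already in id order, and stop after offset + limit rows, with no filtering and no sort; (groupid, created) does the same for the second query.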

From the comments: this did indeed help, bringing the runtime down to 5 to 10 ms.

Regarding postgresql - Slow Postgres 9.3 query, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41069637/
