
postgresql - Is autovacuum (VACUUM) the reason this PostgreSQL UPDATE query occasionally takes hours to finish?


This SQL query usually takes only a few minutes to run:

update import_parts ip
set part_manufacturer_id = pslc.part_manufacturer_id
from parts.part_supplier_line_codes pslc
where trim(lower(ip.line_code)) = trim(lower(pslc.supplier_line_code))
and (ip.status is null or ip.status != '6')
and ip.distributor_id = pslc.distributor_id
and ip.distributor_id = 196;

But I have noticed that it sometimes gets stuck and is automatically cancelled by the 2-hour statement_timeout. I have also noticed several times that when it gets stuck, an autovacuum is running, and that autovacuum also takes a very long time to finish. Here is one instance where the update query and the autovacuum were both running and both took a long time to finish:
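As a minimal sketch (not part of the original post) of how such an overlap can be watched from a separate session, the following uses the PostgreSQL 9.6 catalogs; pg_stat_progress_vacuum is available from 9.6 onward:

-- Long-running statements, including the autovacuum worker and the UPDATE:
SELECT pid,
       state,
       wait_event_type,
       wait_event,
       now() - query_start AS runtime,
       left(query, 80)     AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;

-- Progress of any (auto)vacuum currently running (new in 9.6):
SELECT pid,
       relid::regclass AS table_name,
       phase,
       heap_blks_scanned,
       heap_blks_total
FROM pg_stat_progress_vacuum;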

[screenshot: autovacuum and update query]

^ In this instance, the autovacuum finished in about 1 hour, while the update query finished in almost 2 hours. In other instances the update query exceeds the 2-hour statement_timeout and is therefore cancelled automatically.

Now my question is: is autovacuum (VACUUM) the reason the update query gets stuck or takes hours to finish? If so, what can I do to keep the update query from getting stuck or becoming so slow? If not, what do you think is causing the update query to get stuck or become so slow?

We are using PostgreSQL 9.6.15.

Update 1

I checked whether our RDS instance is running out of server resources. Our instance size is db.t2.medium (2 vCPUs, 4 GB RAM, 1000 IOPS, Provisioned IOPS SSD storage).

Here are the CloudWatch metrics for the past 3 days. Note that the update SQL query above got stuck several times during those 3 days.

[screenshot: CPU utilization]

[screenshot: freeable memory]

[screenshot: write IOPS]

Update 2

The only active locks while the update query and the autovacuum were running:

[screenshot: active locks]

^ The locks highlighted in red were created by the autovacuum. The locks highlighted in green were created by the update query.

Here is the list of all database connections while the update query and the autovacuum were running:

[screenshot: database connections list]

Highlighted in red is the autovacuum. Highlighted in green is the update query.
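As a rough sketch of how the two views above can be reproduced without the console (assumed query, not from the post), the relation-level locks and the sessions holding them can be listed by joining pg_locks with pg_stat_activity:

-- Relation-level locks and the sessions that hold or wait for them:
SELECT l.pid,
       l.locktype,
       l.relation::regclass AS relation,
       l.mode,
       l.granted,
       a.state,
       left(a.query, 80) AS query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation IS NOT NULL
ORDER BY l.pid;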

Here is the EXPLAIN result for the update query:

[screenshot: result of EXPLAIN]

The parts.part_supplier_line_codes table contains only 2,758 rows. No two or more rows in the table share the same supplier_line_code + distributor_id.

The import_parts table contains 12.6 million rows.
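A sketch of the kind of checks behind those two statements (the exact queries used are an assumption, not shown in the post):

-- Confirm there are no duplicate supplier_line_code + distributor_id pairs:
SELECT supplier_line_code, distributor_id, count(*)
FROM parts.part_supplier_line_codes
GROUP BY supplier_line_code, distributor_id
HAVING count(*) > 1;   -- expected to return no rows

-- Row counts mentioned above:
SELECT count(*) FROM parts.part_supplier_line_codes;  -- 2,758 rows
SELECT count(*) FROM import_parts;                    -- about 12.6 million rows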

Update 3

Here is the result of EXPLAIN (ANALYZE, BUFFERS):

EXPLAIN (ANALYZE, BUFFERS)
update import_parts ip
set part_manufacturer_id = pslc.part_manufacturer_id
from parts.part_supplier_line_codes pslc
where trim(lower(ip.line_code)) = trim(lower(pslc.supplier_line_code))
and (ip.status is null or ip.status != '6')
and ip.distributor_id = pslc.distributor_id
and ip.distributor_id = 196;

Update on import_parts ip  (cost=2967312.95..3778109.36 rows=31167172 width=156) (actual time=151475.198..151475.198 rows=0 loops=1)
  Buffers: shared hit=62369982 read=453357 dirtied=375348 written=315748, temp read=154212 written=307558
  ->  Merge Join  (cost=2967312.95..3778109.36 rows=31167172 width=156) (actual time=37567.148..84208.239 rows=10326988 loops=1)
        Merge Cond: ((btrim(lower((pslc.supplier_line_code)::text))) = (btrim(lower((ip.line_code)::text))))
        Buffers: shared hit=94397 read=78007, temp read=154212 written=307558
        ->  Sort  (cost=51.70..52.93 rows=493 width=17) (actual time=9.649..10.039 rows=494 loops=1)
              Sort Key: (btrim(lower((pslc.supplier_line_code)::text)))
              Sort Method: quicksort  Memory: 63kB
              Buffers: shared hit=7 read=11
              ->  Index Scan using index_part_supplier_line_codes_on_distributor_id on part_supplier_line_codes pslc  (cost=0.28..29.65 rows=493 width=17) (actual time=2.926..8.677 rows=494 loops=1)
                    Index Cond: (distributor_id = 196)
                    Buffers: shared hit=2 read=11
        ->  Materialize  (cost=2967261.25..3030480.67 rows=12643883 width=146) (actual time=37557.491..76400.550 rows=12642995 loops=1)
              Buffers: shared hit=94390 read=77996, temp read=154212 written=307558
              ->  Sort  (cost=2967261.25..2998870.96 rows=12643883 width=146) (actual time=37557.486..68337.525 rows=12642995 loops=1)
                    Sort Key: (btrim(lower((ip.line_code)::text)))
                    Sort Method: external merge  Disk: 1233688kB
                    Buffers: shared hit=94390 read=77996, temp read=154212 written=154212
                    ->  Seq Scan on import_parts ip  (cost=0.00..362044.24 rows=12643883 width=146) (actual time=0.027..11903.240 rows=12643918 loops=1)
                          Filter: (((status IS NULL) OR ((status)::text <> '6'::text)) AND (distributor_id = 196))
                          Buffers: shared hit=94390 read=77996
Planning time: 0.169 ms
Execution time: 151561.250 ms
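The line "Sort Method: external merge  Disk: 1233688kB" means the sort of the roughly 12.6 million import_parts rows did not fit into the session's sort memory and spilled about 1.2 GB to temporary files, which is where the temp read/written buffer counts come from. A minimal check of that budget (not part of the original post):

-- work_mem is the per-sort memory budget; a sort larger than this
-- falls back to an on-disk "external merge".
SHOW work_mem;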

Update 4

Given that our AWS RDS instance has a capacity of 1000 IOPS, does the read IOPS in the screenshots below look too high? Could this be the reason the update query sometimes gets stuck and the autovacuum takes so long to finish?

Read IOPS:

[screenshot: read IOPS]

Read latency:

[screenshot: read latency]

Write latency:

[screenshot: write latency]

Swap usage:

[screenshot: swap usage]

Queue depth:

[screenshot: queue depth]

Here is the result of EXPLAIN (ANALYZE, BUFFERS) after restarting our AWS RDS instance:

EXPLAIN (ANALYZE, BUFFERS)
update import_parts ip
set part_manufacturer_id = pslc.part_manufacturer_id
from parts.part_supplier_line_codes pslc
where trim(lower(ip.line_code)) = trim(lower(pslc.supplier_line_code))
and (ip.status is null or ip.status != '6')
and ip.distributor_id = pslc.distributor_id
and ip.distributor_id = 196;

Update on import_parts ip  (cost=3111484.57..3919788.11 rows=31071345 width=156) (actual time=180680.200..180680.200 rows=0 loops=1)
  Buffers: shared hit=62263174 read=712018 dirtied=386277 written=223564, temp read=237087 written=390433
  ->  Merge Join  (cost=3111484.57..3919788.11 rows=31071345 width=156) (actual time=64687.806..112959.396 rows=10326988 loops=1)
        Merge Cond: ((btrim(lower((pslc.supplier_line_code)::text))) = (btrim(lower((ip.line_code)::text))))
        Buffers: shared hit=5 read=325434, temp read=237087 written=390433
        ->  Sort  (cost=58.61..59.85 rows=493 width=17) (actual time=4.238..5.549 rows=494 loops=1)
              Sort Key: (btrim(lower((pslc.supplier_line_code)::text)))
              Sort Method: quicksort  Memory: 63kB
              Buffers: shared hit=5 read=11
              ->  Bitmap Heap Scan on part_supplier_line_codes pslc  (cost=7.40..36.56 rows=493 width=17) (actual time=2.582..3.242 rows=494 loops=1)
                    Recheck Cond: (distributor_id = 196)
                    Heap Blocks: exact=7
                    Buffers: shared read=11
                    ->  Bitmap Index Scan on index_part_supplier_line_codes_on_distributor_id  (cost=0.00..7.28 rows=493 width=0) (actual time=1.805..1.805 rows=494 loops=1)
                          Index Cond: (distributor_id = 196)
                          Buffers: shared read=4
        ->  Materialize  (cost=3111425.95..3174450.99 rows=12605008 width=146) (actual time=64683.559..105123.024 rows=12642995 loops=1)
              Buffers: shared read=325423, temp read=237087 written=390433
              ->  Sort  (cost=3111425.95..3142938.47 rows=12605008 width=146) (actual time=64683.554..96764.494 rows=12642995 loops=1)
                    Sort Key: (btrim(lower((ip.line_code)::text)))
                    Sort Method: external merge  Disk: 1233528kB
                    Buffers: shared read=325423, temp read=237087 written=237087
                    ->  Seq Scan on import_parts ip  (cost=0.00..514498.12 rows=12605008 width=146) (actual time=0.748..36768.509 rows=12643918 loops=1)
                          Filter: (((status IS NULL) OR ((status)::text <> '6'::text)) AND (distributor_id = 196))
                          Buffers: shared read=325423
Planning time: 23.127 ms
Execution time: 180803.124 ms

I restarted the RDS instance to clear PostgreSQL's cache and see whether there was a caching issue. I read somewhere that restarting PostgreSQL clears the database's cache.
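A minimal sketch, assumed and not part of the original post, of checking how much of the database's block reads are served from PostgreSQL's buffer cache rather than disk:

-- Fraction of block reads served from shared buffers for the current database:
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4) AS cache_hit_ratio
FROM pg_stat_database
WHERE datname = current_database();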

Best answer

Autovacuum will never block an UPDATE, and vice versa. This is a fundamental design principle of VACUUM; PostgreSQL could not work properly otherwise.

The only influence autovacuum has on the UPDATE is through shared resources, most likely I/O: VACUUM causes I/O load, and it also uses memory and CPU. So you may want to check whether any of these resources are running short while autovacuum is running. If so, the answer is to move to more powerful hardware. Slowing down autovacuum because the machine is too feeble is a bad idea; in the long run that leads to bloat and other problems.
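"Slowing down autovacuum" refers to its cost-based throttling settings; a minimal sketch (not from the answer) of inspecting them:

-- Cost-based vacuum throttling: lowering the limit or raising the delay
-- slows autovacuum down, which the answer advises against.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('autovacuum_vacuum_cost_delay',
               'autovacuum_vacuum_cost_limit',
               'vacuum_cost_delay',
               'vacuum_cost_limit',
               'autovacuum_max_workers');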

Regarding "postgresql - Is autovacuum (VACUUM) the reason this PostgreSQL UPDATE query occasionally takes hours to finish?", see the corresponding question on Stack Overflow: https://stackoverflow.com/questions/62557194/
