
PostgreSQL complex sum query

Reposted · Author: 行者123 · Updated: 2023-11-29 13:25:45

I have the following tables:

video (id, name) 

keyframe (id, name, video_id) /*video_id has fk on video.id*/

detector (id, concepts)

score (detector_id, keyframe_id, score) /*detector_id has fk on detector.id and keyframe_id has fk on keyframe.id*/

Essentially, a video has multiple keyframes associated with it, and every keyframe is scored by every detector. Each detector has a set of concepts against which it scores keyframes.

Now I would like to find the following, in a single query if possible:

Given a set of detector ids (up to 5, say), return the top 10 videos that score best on those detectors. Score them by averaging each video's keyframe scores per detector, then summing the per-detector scores.

Example: for a video with 3 associated keyframes and 2 detectors with the following scores:

detector_id | keyframe_id | score
------------|-------------|-------
          1 |           1 | 0.0281
          1 |           2 | 0.0012
          1 |           3 | 0.0269
          2 |           1 | 0.1341
          2 |           2 | 0.9726
          2 |           3 | 0.7125

This would give the video a score of:

sum(avg(0.0281, 0.0012, 0.0269), avg(0.1341, 0.9726, 0.7125))
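The sum-of-averages above can be sanity-checked with a small sqlite3 sketch (sqlite is used here only to keep the snippet self-contained and runnable; the question itself is about PostgreSQL, and only the table name and score rows come from the example):

```python
import sqlite3

# Reproduce the example score rows in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE score (detector_id INTEGER, keyframe_id INTEGER, score REAL)")
conn.executemany(
    "INSERT INTO score VALUES (?, ?, ?)",
    [(1, 1, 0.0281), (1, 2, 0.0012), (1, 3, 0.0269),
     (2, 1, 0.1341), (2, 2, 0.9726), (2, 3, 0.7125)],
)

# Average per detector, then sum the per-detector averages.
(video_score,) = conn.execute(
    "SELECT sum(avg_score)"
    " FROM (SELECT avg(score) AS avg_score FROM score GROUP BY detector_id)"
).fetchone()
print(round(video_score, 6))  # 0.625133
```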

In the end I want the following result:

video_id | score
---------|---------
       1 | 0.417328
       2 | ...

I think it has to be something like the following, but I haven't quite gotten there yet:

select
    (select
        (select sum(avg_score) summed_score
         from
            (select avg(s.score) avg_score
             from score s
             where s.detector_id = ANY(array[1,2,3,4,5])
               and s.keyframe_id = kf.id) x)
     from keyframe kf
     where kf.video_id = v.id) y
from video v

My score table is very large (100 million rows), so I would like this to be as fast as possible (every other option I have tried takes minutes to complete). In total I have about 3000 videos, 500 detectors, and roughly 15 keyframes per video.

If it is not possible to do this in under ~2 seconds, I am also open to restructuring the database schema. There will probably be no inserts/deletes in the database at all.

Edit:

Thanks to GabrielsMessanger I have an answer; here is the query plan:

EXPLAIN (analyze, verbose)
SELECT
v_id, sum(fd_avg_score)
FROM (
SELECT
v.id as v_id, k.id as k_id, d.id as d_id,
avg(s.score) as fd_avg_score
FROM
video v
JOIN keyframe k ON k.video_id = v.id
JOIN score s ON s.keyframe_id = k.id
JOIN detector d ON d.id = s.detector_id
WHERE
d.id = ANY(ARRAY[1,2,3,4,5]) /*here goes detector's array*/
GROUP BY
v.id,
k.id,
d.id
) sub
GROUP BY
v_id
;


"GroupAggregate  (cost=1865513.09..1910370.09 rows=200 width=12) (actual time=52141.684..52908.198 rows=2991 loops=1)"
" Output: v.id, sum((avg(s.score)))"
" Group Key: v.id"
" -> GroupAggregate (cost=1865513.09..1893547.46 rows=1121375 width=20) (actual time=52141.623..52793.184 rows=1121375 loops=1)"
" Output: v.id, k.id, d.id, avg(s.score)"
" Group Key: v.id, k.id, d.id"
" -> Sort (cost=1865513.09..1868316.53 rows=1121375 width=20) (actual time=52141.613..52468.062 rows=1121375 loops=1)"
" Output: v.id, k.id, d.id, s.score"
" Sort Key: v.id, k.id, d.id"
" Sort Method: external merge Disk: 37232kB"
" -> Hash Join (cost=11821.18..1729834.13 rows=1121375 width=20) (actual time=120.706..51375.777 rows=1121375 loops=1)"
" Output: v.id, k.id, d.id, s.score"
" Hash Cond: (k.video_id = v.id)"
" -> Hash Join (cost=11736.89..1711527.49 rows=1121375 width=20) (actual time=119.862..51141.066 rows=1121375 loops=1)"
" Output: k.id, k.video_id, s.score, d.id"
" Hash Cond: (s.keyframe_id = k.id)"
" -> Nested Loop (cost=4186.70..1673925.96 rows=1121375 width=16) (actual time=50.878..50034.247 rows=1121375 loops=1)"
" Output: s.score, s.keyframe_id, d.id"
" -> Seq Scan on public.detector d (cost=0.00..11.08 rows=5 width=4) (actual time=0.011..0.079 rows=5 loops=1)"
" Output: d.id, d.concepts"
" Filter: (d.id = ANY ('{1,2,3,4,5}'::integer[]))"
" Rows Removed by Filter: 492"
" -> Bitmap Heap Scan on public.score s (cost=4186.70..332540.23 rows=224275 width=16) (actual time=56.040..9961.040 rows=224275 loops=5)"
" Output: s.detector_id, s.keyframe_id, s.score"
" Recheck Cond: (s.detector_id = d.id)"
" Rows Removed by Index Recheck: 34169904"
" Heap Blocks: exact=192845 lossy=928530"
" -> Bitmap Index Scan on score_index (cost=0.00..4130.63 rows=224275 width=0) (actual time=49.748..49.748 rows=224275 loops=5)"
" Index Cond: (s.detector_id = d.id)"
" -> Hash (cost=3869.75..3869.75 rows=224275 width=8) (actual time=68.924..68.924 rows=224275 loops=1)"
" Output: k.id, k.video_id"
" Buckets: 16384 Batches: 4 Memory Usage: 2205kB"
" -> Seq Scan on public.keyframe k (cost=0.00..3869.75 rows=224275 width=8) (actual time=0.003..33.662 rows=224275 loops=1)"
" Output: k.id, k.video_id"
" -> Hash (cost=46.91..46.91 rows=2991 width=4) (actual time=0.834..0.834 rows=2991 loops=1)"
" Output: v.id"
" Buckets: 1024 Batches: 1 Memory Usage: 106kB"
" -> Seq Scan on public.video v (cost=0.00..46.91 rows=2991 width=4) (actual time=0.005..0.417 rows=2991 loops=1)"
" Output: v.id"
"Planning time: 2.136 ms"
"Execution time: 52914.840 ms"

Best Answer

Disclaimer:

My final answer is based on the comments and an extended chat discussion with the author. One thing to note: every keyframe_id is assigned to only one video.

Original answer:

Isn't it as simple as the following query?

SELECT
    v_id, sum(fd_avg_score) AS video_score
FROM (
    SELECT
        v.id AS v_id, k.id AS k_id, s.detector_id AS d_id,
        avg(s.score) AS fd_avg_score
    FROM
        video v
        JOIN keyframe k ON k.video_id = v.id
        JOIN score s ON s.keyframe_id = k.id
    WHERE
        s.detector_id = ANY(ARRAY[1,2,3,4,5]) /*here goes detector's array*/
    GROUP BY
        v.id, k.id, s.detector_id
) sub
GROUP BY v_id
ORDER BY video_score DESC /* without this, LIMIT returns 10 arbitrary videos rather than the top 10 */
LIMIT 10
;

First, in the subquery, we join videos with their keyframes and keyframes with their scores. We compute the average score per video, per keyframe, per detector (as you described). Finally, in the outer query, we sum the avg_score values per video.

Performance

As the author pointed out, he has PRIMARY KEYs on the id column of every table, and also a composite index on score(detector_id, keyframe_id). That could be enough to run this query fast.

However, while testing, the author needed further optimization. So, two things:

  1. Remember to always VACUUM ANALYZE your tables, especially after inserting 100 million rows (as with the score table). So at the very least, run VACUUM ANALYZE score.
  2. To optimize further, we can change the composite index on score(detector_id, keyframe_id) into a composite index on score(detector_id, keyframe_id, score). That may allow PostgreSQL to use an Index Only Scan while computing the averages.
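Point 2 can be illustrated with sqlite3, which reports the same idea as a "covering index" in its query plan (the index name and demo query are invented here; in PostgreSQL you would instead check `EXPLAIN` output for an Index Only Scan):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE score (detector_id INTEGER, keyframe_id INTEGER, score REAL)")
# Widen the composite index to include the score column, so the aggregate
# can be answered from the index alone, without touching the table heap.
conn.execute("CREATE INDEX score_det_kf_score ON score (detector_id, keyframe_id, score)")

plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT keyframe_id, avg(score)
    FROM score
    WHERE detector_id = 1
    GROUP BY keyframe_id
""").fetchall()
print(plan[0][-1])  # e.g. SEARCH score USING COVERING INDEX score_det_kf_score (detector_id=?)
```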

Regarding this PostgreSQL complex sum query, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34039523/
