
sql - Postgres query via ODBC an order of magnitude slower?

Reposted. Author: 太空狗. Last updated: 2023-10-30 01:49:02

We have an application that fetches some data from a PostgreSQL 9.0.3 database via the psqlodbc 09.00.0200 driver in the following way:

1) SQLExecDirect with START TRANSACTION
2) SQLExecDirect with

DECLARE foo SCROLL CURSOR FOR  
SELECT table.alotofcolumns
FROM table
ORDER BY name2, id LIMIT 10000

3) SQLPrepare with

SELECT table.alotofcolumns, l01.languagedescription
FROM fetchcur('foo', ?, ?) table (column definitions)
LEFT OUTER JOIN languagetable l01 ON (l01.lang = 'EN'
AND l01.type = 'some type'
AND l01.grp = 'some group'
AND l01.key = table.somecolumn)
[~20 more LEFT OUTER JOINs in the same style, but for another column]

4) SQLExecute with param1 set to SQL_FETCH_RELATIVE and param2 set to 1
5) SQLExecute with param1 set to SQL_FETCH_RELATIVE and param2 set to -1
6) SQLExecute with param1 set to SQL_FETCH_RELATIVE and param2 set to 0
7) Free everything, close the cursor, end the transaction
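In plain SQL, the sequence above corresponds roughly to the following (a sketch; column lists and join conditions are abbreviated as in the question, and 6 is the numeric value of SQL_FETCH_RELATIVE):

```sql
START TRANSACTION;

DECLARE foo SCROLL CURSOR FOR
    SELECT alotofcolumns FROM table ORDER BY name2, id LIMIT 10000;

-- Steps 4-6, repeated per user request with param2 = 1, -1, 0:
SELECT t.alotofcolumns, l01.languagedescription
FROM fetchcur('foo', 6, 1) AS t (/* column definitions */)
LEFT OUTER JOIN languagetable l01 ON (/* ... */);

CLOSE foo;
COMMIT;
```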

The function fetchcur executes FETCH RELATIVE $3 IN $1 INTO rec, where rec is a record, and returns that record. Steps 4-6 are executed again and again on user request, and more queries are executed within the same transaction in the meantime. It can also take quite a while before the next user request arrives. Usually the queries take this long:
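A plausible shape for fetchcur, reconstructed from that description (the real signature and the handling of the fetch-type parameter are not shown in the question, so this is only a sketch of the RELATIVE case):

```sql
CREATE OR REPLACE FUNCTION fetchcur(cur refcursor, fetchtype integer, amount integer)
RETURNS record AS $$
DECLARE
    rec record;
BEGIN
    -- fetchtype presumably selects among the ODBC fetch orientations;
    -- only SQL_FETCH_RELATIVE is exercised in the question.
    FETCH RELATIVE amount FROM cur INTO rec;
    RETURN rec;
END;
$$ LANGUAGE plpgsql;
```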

4) ~130 ms
5) ~115 ms
6) ~110 ms

That is generally too slow for a snappy user experience. So I tried the same statements from the psql command line with \timing on. For steps 3-6 I used these statements:

3)

PREPARE p_foo (INTEGER, INTEGER) AS 
SELECT table.alotofcolumns, l01.languagedescription
FROM fetchcur('foo', $1, $2) table (column definitions)
LEFT OUTER JOIN languagetable l01 ON (l01.lang = 'EN'
AND l01.type = 'some type'
AND l01.grp = 'some group'
AND l01.key = table.somecolumn)
[~20 more LEFT OUTER JOINs in the same style, but for another column]

4-6)

EXPLAIN ANALYZE EXECUTE p_foo (6, x);

The first EXECUTE took 89 ms, but subsequent ones dropped to about 7 ms. Even if I wait several minutes between executions, every query stays under 10 ms. So where do the extra 100 ms go? The application and the database run on the same system, so network latency should not be an issue. Each LEFT OUTER JOIN returns only one row, and only one column of that result is added to the result set. So the result is a single row with ~200 columns, mostly of type VARCHAR and INTEGER. But that should not be so much data that transferring it on the same machine takes around 100 ms. Any hints would be helpful.

The machine has 2 GB of RAM, and the parameters are set as follows:

shared_buffers = 512MB
effective_cache_size = 256MB
work_mem = 16MB
maintenance_work_mem = 256MB
temp_buffers = 8MB
wal_buffers = 1MB


EDIT: I just found out how to create a mylog from psqlodbc, but I cannot find timing values in it.

EDIT2: It was also possible to add a timestamp to every line. This indeed shows that it takes >100 ms before the psqlodbc driver receives a response. So I tried psql again, adding the option -h 127.0.0.1 to make sure it also goes over TCP/IP. The psql result is <10 ms. How is that possible?

00:07:51.026 [3086550720][SQLExecute]
00:07:51.026 [3086550720]PGAPI_Execute: entering...1
00:07:51.026 [3086550720]PGAPI_Execute: clear errors...
00:07:51.026 [3086550720]prepareParameters was not called, prepare state:3
00:07:51.026 [3086550720]SC_recycle_statement: self= 0x943b1e8
00:07:51.026 [3086550720]PDATA_free_params: ENTER, self=0x943b38c
00:07:51.026 [3086550720]PDATA_free_params: EXIT
00:07:51.026 [3086550720]Exec_with_parameters_resolved: copying statement params: trans_status=6, len=10128, stmt='SELECT [..]'
00:07:51.026 [3086550720]ResolveOneParam: from(fcType)=-15, to(fSqlType)=4(23)
00:07:51.026 [3086550720]cvt_null_date_string=0 pgtype=23 buf=(nil)
00:07:51.026 [3086550720]ResolveOneParam: from(fcType)=4, to(fSqlType)=4(23)
00:07:51.026 [3086550720]cvt_null_date_string=0 pgtype=23 buf=(nil)
00:07:51.026 [3086550720] stmt_with_params = 'SELECT [..]'
00:07:51.027 [3086550720]about to begin SC_execute
00:07:51.027 [3086550720] Sending SELECT statement on stmt=0x943b1e8, cursor_name='SQL_CUR0x943b1e8' qflag=0,1
00:07:51.027 [3086550720]CC_send_query: conn=0x9424668, query='SELECT [..]'
00:07:51.027 [3086550720]CC_send_query: conn=0x9424668, query='SAVEPOINT _EXEC_SVP_0x943b1e8'
00:07:51.027 [3086550720]send_query: done sending query 35bytes flushed
00:07:51.027 [3086550720]in QR_Constructor
00:07:51.027 [3086550720]exit QR_Constructor
00:07:51.027 [3086550720]read 21, global_socket_buffersize=4096
00:07:51.027 [3086550720]send_query: got id = 'C'
00:07:51.027 [3086550720]send_query: ok - 'C' - SAVEPOINT
00:07:51.027 [3086550720]send_query: setting cmdbuffer = 'SAVEPOINT'
00:07:51.027 [3086550720]send_query: returning res = 0x8781c90
00:07:51.027 [3086550720]send_query: got id = 'Z'
00:07:51.027 [3086550720]QResult: enter DESTRUCTOR
00:07:51.027 [3086550720]QResult: in QR_close_result
00:07:51.027 [3086550720]QResult: free memory in, fcount=0
00:07:51.027 [3086550720]QResult: free memory out
00:07:51.027 [3086550720]QResult: enter DESTRUCTOR
00:07:51.027 [3086550720]QResult: exit close_result
00:07:51.027 [3086550720]QResult: exit DESTRUCTOR
00:07:51.027 [3086550720]send_query: done sending query 1942bytes flushed
00:07:51.027 [3086550720]in QR_Constructor
00:07:51.027 [3086550720]exit QR_Constructor
00:07:51.027 [3086550720]read -1, global_socket_buffersize=4096
00:07:51.027 [3086550720]Lasterror=11
00:07:51.133 [3086550720]!!! poll ret=1 revents=1
00:07:51.133 [3086550720]read 4096, global_socket_buffersize=4096
00:07:51.133 [3086550720]send_query: got id = 'T'
00:07:51.133 [3086550720]QR_fetch_tuples: cursor = '', self->cursor=(nil)
00:07:51.133 [3086550720]num_fields = 166
00:07:51.133 [3086550720]READING ATTTYPMOD
00:07:51.133 [3086550720]CI_read_fields: fieldname='id', adtid=23, adtsize=4, atttypmod=-1 (rel,att)=(0,0)
[last two lines repeated for the other columns]
00:07:51.138 [3086550720]QR_fetch_tuples: past CI_read_fields: num_fields = 166
00:07:51.138 [3086550720]MALLOC: tuple_size = 100, size = 132800
00:07:51.138 [3086550720]QR_next_tuple: inTuples = true, falling through: fcount = 0, fetch_number = 0
00:07:51.139 [3086550720]qresult: len=3, buffer='282'
[last line repeated for the other columns]
00:07:51.140 [3086550720]end of tuple list -- setting inUse to false: this = 0x87807e8 SELECT 1
00:07:51.140 [3086550720]_QR_next_tuple: 'C' fetch_total = 1 & this_fetch = 1
00:07:51.140 [3086550720]QR_next_tuple: backend_rows < CACHE_SIZE: brows = 0, cache_size = 0
00:07:51.140 [3086550720]QR_next_tuple: reached eof now
00:07:51.140 [3086550720]send_query: got id = 'Z'
00:07:51.140 [3086550720] done sending the query:
00:07:51.140 [3086550720]extend_column_bindings: entering ... self=0x943b270, bindings_allocated=166, num_columns=166
00:07:51.140 [3086550720]exit extend_column_bindings=0x9469500
00:07:51.140 [3086550720]SC_set_Result(943b1e8, 87807e8)
00:07:51.140 [3086550720]QResult: enter DESTRUCTOR
00:07:51.140 [3086550720]retval=0
00:07:51.140 [3086550720]CC_send_query: conn=0x9424668, query='RELEASE _EXEC_SVP_0x943b1e8'
00:07:51.140 [3086550720]send_query: done sending query 33bytes flushed
00:07:51.140 [3086550720]in QR_Constructor
00:07:51.140 [3086550720]exit QR_Constructor
00:07:51.140 [3086550720]read -1, global_socket_buffersize=4096
00:07:51.140 [3086550720]Lasterror=11
00:07:51.140 [3086550720]!!! poll ret=1 revents=1
00:07:51.140 [3086550720]read 19, global_socket_buffersize=4096
00:07:51.140 [3086550720]send_query: got id = 'C'
00:07:51.140 [3086550720]send_query: ok - 'C' - RELEASE
00:07:51.140 [3086550720]send_query: setting cmdbuffer = 'RELEASE'
00:07:51.140 [3086550720]send_query: returning res = 0x877cd30
00:07:51.140 [3086550720]send_query: got id = 'Z'
00:07:51.140 [3086550720]QResult: enter DESTRUCTOR
00:07:51.140 [3086550720]QResult: in QR_close_result
00:07:51.140 [3086550720]QResult: free memory in, fcount=0
00:07:51.140 [3086550720]QResult: free memory out
00:07:51.140 [3086550720]QResult: enter DESTRUCTOR
00:07:51.140 [3086550720]QResult: exit close_result
00:07:51.140 [3086550720]QResult: exit DESTRUCTOR

EDIT3: I realized that I had not used the same query from the mylog in my earlier psql test. psqlodbc does not seem to use PREPARE for SQLPrepare and SQLExecute; it adds the parameter values itself and sends the query. As araqnid suggested, I set the log_duration parameter to 0 and compared the results from the postgresql log with those from the application and from psql. The results are:

                             psql/app   pg log
query executed from app:     110 ms     70 ms
psql with PREPARE/EXECUTE:    10 ms      5 ms
psql with full SELECT:        85 ms     70 ms

So how should these values be interpreted? It seems most of the time is spent sending the full query (10000 characters) to the database and generating an execution plan. If that is true, changing the SQLPrepare and SQLExecute calls to explicit PREPARE/EXECUTE statements executed via SQLExecDirect could solve the problem. Any objections?
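That workaround would mean sending roughly the following through SQLExecDirect (a sketch with the placeholder names from the question):

```sql
-- once per session/transaction: the long statement travels only once
PREPARE p_foo (INTEGER, INTEGER) AS
SELECT table.alotofcolumns, l01.languagedescription
FROM fetchcur('foo', $1, $2) table (/* column definitions */)
LEFT OUTER JOIN languagetable l01 ON (/* ... */);

-- per user request: only this short statement is sent and the cached plan is reused
EXECUTE p_foo (6, 1);
```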

Best Answer

I finally found the problem: psqlodbc's SQLPrepare/SQLExecute does not perform PREPARE/EXECUTE by default. The driver builds the SELECT itself and sends it.

The solution is to add UseServerSidePrepare=1 to odbc.ini, or to the ConnectionString for SQLDriverConnect. The total execution time of one query, measured from the application, dropped from >100 ms to 5-10 ms.
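For example, in odbc.ini (DSN name and driver path are placeholders, other settings elided):

```ini
[PostgreSQL]
Driver = /usr/lib/psqlodbcw.so
UseServerSidePrepare = 1
```

Or equivalently as a keyword in the SQLDriverConnect connection string, e.g. "DSN=PostgreSQL;UseServerSidePrepare=1;".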

Regarding "sql - Postgres query via ODBC an order of magnitude slower?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/5082789/
