
mysql - LIMIT with a large offset ("LIMIT 500000, 10") is slow even with an index?

Reposted — Author: 可可西里 · Updated: 2023-11-01 07:47:46

I have a table with an int field, let's call it createTime. The table contains several million records. Now I want to run this query:

select * from `table` order by `createTime` desc limit 500000, 10

I have already created an index on createTime, but the query still runs very slowly. Why? And how can I improve it?

The EXPLAIN output is as follows:

id: 1
select_type: SIMPLE
table: table
type: index
possible_keys: NULL
key: createTime
key_len: 4
ref: NULL
rows: 500010
Extra:

With smaller offsets, the same query runs much faster.

Best answer

General rule: avoid OFFSET for large tables.

As the offset increases, the time taken for the query to execute increases progressively, which means paging deep into very large tables can take an extremely long time. The reason is that OFFSET does not let the engine jump ahead: even when an index supplies the ordering, the server must still walk through and discard the first x index entries (and, for SELECT *, fetch each corresponding row) before it can return anything. So to reach the rows at offset x, the engine effectively reads every row from 0 to x.
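One common mitigation is a deferred join ("late row lookup"): skip the 500,000 entries using only the narrow index, then fetch the full rows for just the 10 matching ids. A minimal sketch, assuming `table` has a primary key `id` (not shown in the question; adjust names to your schema):

```sql
-- Inner query scans only the createTime index plus the primary key,
-- so the 500,000 skipped entries never trigger full-row lookups.
SELECT t.*
FROM `table` AS t
JOIN (
    SELECT id
    FROM `table`
    ORDER BY createTime DESC
    LIMIT 500000, 10
) AS page USING (id);
```

The subquery still discards 500,000 index entries, but scanning a compact secondary index is far cheaper than fetching 500,010 complete rows, so this often gives a large speedup without changing the pagination scheme.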

The general rule of thumb is "never use OFFSET in a LIMIT clause". For small tables you probably won't notice any difference, but on tables with over a million rows, avoiding it yields huge performance gains.
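The usual replacement is keyset ("seek") pagination: instead of an offset, remember the sort-key values from the last row of the previous page and seek directly to them via the index. A sketch, assuming a composite index on (createTime, id) and a primary key `id` as tie-breaker; the literal values are placeholders:

```sql
-- Seek past the last row of the previous page instead of counting rows.
-- (1698800000, 123456) stands in for the createTime and id of that row.
SELECT *
FROM `table`
WHERE (createTime, id) < (1698800000, 123456)
ORDER BY createTime DESC, id DESC
LIMIT 10;
```

This runs in near-constant time regardless of how deep the page is, because the index is used to seek rather than to count. The trade-off is that you can only step page by page (or bookmark known positions); jumping to an arbitrary page number still requires an offset.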

Regarding mysql - LIMIT with a large offset ("LIMIT 500000, 10") is slow even with an index?, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/8467104/
