
mysql - Optimizing an SQL query

Reposted · Author: 行者123 · Updated: 2023-11-29 08:57:29

Table schema

The CREATE statements for the two tables are as follows:

Table1: (file_path_key, dir_path_key)

create table Table1 (
    file_path_key varchar(500),
    dir_path_key varchar(500),
    primary key (file_path_key)
) engine = innodb;

Example: file_path_key = /home/playstation/a.txt
dir_path_key = /home/playstation/

Table2: (file_path_key, hash_key)

create table Table2 (
    file_path_key varchar(500) not null,
    hash_key bigint(20) not null,
    foreign key (file_path_key) references Table1 (file_path_key)
        on update cascade on delete cascade
) engine = innodb;
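To make the relationship between the tables concrete, here is a minimal sketch of sample data (the paths and hash values are hypothetical, invented for illustration):

```sql
-- Each row of Table1 records a file and its parent directory;
-- Table2 maps the same file path to a (non-unique) content hash.
insert into Table1 (file_path_key, dir_path_key) values
    ('/home/playstation/a.txt', '/home/playstation/'),
    ('/home/playstation/b.txt', '/home/playstation/'),
    ('/home/user/c.txt',        '/home/user/');

insert into Table2 (file_path_key, hash_key) values
    ('/home/playstation/a.txt', 42),
    ('/home/user/c.txt',        42);  -- same hash as a.txt, different directory
```

With H = 42 and D = '/home/playstation/', the query described below should return only /home/user/c.txt.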

Goal:

Given a hash value *H* and a directory string *D*, I need to find all hashes in Table2 that are equal to *H* and whose corresponding file entry does not have *D* as its directory.

In this particular case, Table1 has about 40,000 entries and Table2 has about 5,000,000, which makes my current query very slow.

select distinct s1.file_path_key
from Table1 as s1
join (select * from Table2 where hash_key = H) as s2
    on s1.file_path_key = s2.file_path_key
    and s1.dir_path_key != D;

Best answer

The subselect is indeed slowing the query down unnecessarily.

You should remove it and replace it with a plain join, pushing all non-join conditions down into the WHERE clause. (In older MySQL versions a derived table like this is materialized into a temporary table, which typically cannot benefit from the base table's indexes.)

You should also add indexes on the Table1.dir_path_key and Table2.hash_key columns:

ALTER TABLE Table1
ADD INDEX dir_path_key (dir_path_key(255));

ALTER TABLE Table2
ADD INDEX hash_key (hash_key);
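The 255-character prefix on dir_path_key is needed because a full varchar(500) key can exceed InnoDB's index key length limit (767 bytes under the classic row formats, and fewer than 500 characters once a multi-byte charset such as utf8 is in play). A quick way to confirm both indexes exist, a sketch assuming the ALTER statements above were applied:

```sql
-- List the indexes on each table; the Key_name column should show
-- dir_path_key on Table1 and hash_key on Table2.
SHOW INDEX FROM Table1;
SHOW INDEX FROM Table2;
```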

Try a query like this:

select distinct s1.file_path_key
from Table1 as s1
join Table2 as s2 on s1.file_path_key = s2.file_path_key
where s1.dir_path_key != D
  and s2.hash_key = H;
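To check that the rewritten query actually uses the new indexes, run it through EXPLAIN (H and D stand for the literal hash and directory values; the exact plan depends on the MySQL version and the table statistics):

```sql
explain
select distinct s1.file_path_key
from Table1 as s1
join Table2 as s2 on s1.file_path_key = s2.file_path_key
where s1.dir_path_key != D
  and s2.hash_key = H;
```

If the plan shows hash_key in the key column for s2, the join starts from the small set of rows matching the hash rather than scanning all 5,000,000 rows of Table2.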

Regarding "mysql - Optimizing an SQL query", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/9588639/
