
MySQL update query taking too long to complete


I killed the query after 48 hours..

Table A = 15 million rows (temp_pull_newconsumer_boatowners)
Table B = 131,060,747 rows (master_consumer_export_06172013_FullMatchBack_Final)

#===============================================
UPDATE temp_pull_newconsumer_boatowners a,master_consumer_export_06172013_FullMatchBack_Final b
SET a.email = b.reg_email
WHERE a.primaryaddress=b.DeliveryLine1
AND a.personlastname=b.reg_lastname
AND LEFT(a.personfirstname,1) = LEFT(b.reg_firstname,1)
AND a.cityname=b.city
AND a.state =b.state
AND IFNULL(b.DeliveryLine1,'')<>''
AND IFNULL(a.primaryaddress,'')<>''
AND IFNULL(b.reg_email,'')<>''
AND IFNULL(a.personfirstname,'')<>''
AND IFNULL(b.reg_firstname,'')<>''
AND IFNULL(a.personlastname,'')<>''
AND IFNULL(b.reg_lastname,'')<>''
AND IFNULL(a.cityname,'')<>''
AND IFNULL(b.city,'')<>''
AND IFNULL(a.state,'')<>''
AND IFNULL(b.state,'')<>''
AND IFNULL(a.email,'')=''
#==============================

========= EXPLAIN EXTENDED ===============

*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: a
type: ALL
possible_keys: inddddddd_09,ind_909090900999
key: NULL
key_len: NULL
ref: NULL
rows: 15144363
filtered: 100.00
Extra: Using where
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: b
type: ref
possible_keys: ind_999900_0090_I,ind_9090909999,ind_9090909999Ti
key: ind_999900_0090_I
key_len: 103
ref: load_file.a.primaryaddress
rows: 1
filtered: 100.00
Extra: Using where




- All table fields are of type varchar
- The fields are properly indexed
- 16 GB of memory
- It takes about 25 minutes to update 5k records (if I add an id (primary key) column to Table A and restrict the update with a condition such as WHERE id BETWEEN 1 AND 500000; a sketch of that batched form follows this list)
- There is no type conversion happening (checked in the EXPLAIN EXTENDED warnings)
- The issue started after I moved the MySQL data directories to another drive (same type of SSD)
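For reference, a minimal sketch of that batched variant, assuming the id column and the table/column names from the question (the 1-500000 window is just the example range mentioned above, not a recommendation):

UPDATE temp_pull_newconsumer_boatowners a
JOIN   master_consumer_export_06172013_FullMatchBack_Final b
       ON  a.primaryaddress = b.DeliveryLine1
       AND a.personlastname = b.reg_lastname
       AND a.cityname       = b.city
       AND a.state          = b.state
       AND LEFT(a.personfirstname,1) = LEFT(b.reg_firstname,1)
SET    a.email = b.reg_email
WHERE  a.id BETWEEN 1 AND 500000        -- process one id window at a time
  AND  IFNULL(a.email,'') = ''          -- only rows still missing an email
  AND  IFNULL(b.reg_email,'') <> '';    -- only matches that actually have one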


Below is my my.cnf:

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/mysqldata
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address = 0.0.0.0

# Fine Tuning

innodb_buffer_pool_size=12G
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M

[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[mysql]

[isamchk]
key_buffer = 16M

!includedir /etc/mysql/conf.d/

#==========================================

I can see it scans 15M rows. How can I check how many rows have been scanned so far? And what configuration changes can I make in my.cnf to speed up this query?
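(One way to watch progress while the UPDATE is running, assuming InnoDB and MySQL 5.5 or later, is to query information_schema.INNODB_TRX; this is only a monitoring sketch, not a fix:)

SELECT trx_id,
       trx_started,
       trx_rows_locked,     -- rows currently locked by the transaction
       trx_rows_modified    -- rows changed so far
FROM   information_schema.INNODB_TRX;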

I suspect the problem is in my.cnf, because it started right after I moved the data directory to another drive. I may also have made some changes to global variables that did not survive the MySQL service restart, and I can't remember them now.
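(To check whether the values the server is actually running with match my.cnf, the current settings can be compared with SHOW GLOBAL VARIABLES; the names below correspond to settings in the config above, with key_buffer exposed as key_buffer_size:)

SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';
SHOW GLOBAL VARIABLES LIKE 'datadir';
SHOW GLOBAL VARIABLES LIKE 'tmpdir';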

The CREATE TABLE statements for both tables are shown below...

CREATE TABLE `master_consumer_export_06172013_FullMatchBack_Final` (
`reg_source` varchar(100) DEFAULT NULL,
`reg_addDate` varchar(100) DEFAULT NULL,
`reg_firstName` varchar(100) DEFAULT NULL,
`reg_lastName` varchar(100) DEFAULT NULL,
`reg_add1` varchar(100) DEFAULT NULL,
`reg_city` varchar(100) DEFAULT NULL,
`reg_state` varchar(100) DEFAULT NULL,
`reg_zip` varchar(100) DEFAULT NULL,
`reg_phone` varchar(100) DEFAULT NULL,
`reg_email` varchar(100) DEFAULT NULL,
`reg_optinUrlClean` varchar(100) DEFAULT NULL,
`reg_IPClean` varchar(100) DEFAULT NULL,
`reg_dateTime` varchar(100) DEFAULT NULL,
`reg_dateStandard` varchar(100) DEFAULT NULL,
`duplicate` varchar(100) DEFAULT NULL,
`DeliveryLine1` varchar(100) DEFAULT NULL,
`DeliveryLine2` varchar(100) DEFAULT NULL,
`city` varchar(100) DEFAULT NULL,
`state` varchar(100) DEFAULT NULL,
`ZIPCode` varchar(100) DEFAULT NULL,
`FullZIPCode` varchar(100) DEFAULT NULL,
`Latitude` varchar(100) DEFAULT NULL,
`Longitude` varchar(100) DEFAULT NULL,
`Precision` varchar(100) DEFAULT NULL,
`DeliveryPointBarcode` varchar(100) DEFAULT NULL,
`CarrierRoute` varchar(100) DEFAULT NULL,
`CountyFIPS` varchar(100) DEFAULT NULL,
`CountyName` varchar(100) DEFAULT NULL,
`CongressionalDistrict` varchar(100) DEFAULT NULL,
`Deliverable` varchar(100) DEFAULT NULL,
`RecordType` varchar(100) DEFAULT NULL,
`RDI` varchar(100) DEFAULT NULL,
`CMRA` varchar(100) DEFAULT NULL,
`processingDate` varchar(100) DEFAULT NULL,
`suppressed_by_master_suppression` varchar(100) DEFAULT NULL,
`master_consumer_id` varchar(100) DEFAULT NULL,
`quickiesuppressioncode` varchar(100) DEFAULT NULL,
`EmailUploadedOnQuickie` varchar(100) DEFAULT NULL,
`MC` varchar(100) DEFAULT NULL,
`col` varchar(100) DEFAULT NULL,
`IsBadEMail` varchar(100) DEFAULT NULL,
`Domain_From_Email` varchar(100) DEFAULT NULL,
`Fgx_rdi` varchar(100) DEFAULT NULL,
`Fgx_Email` varchar(100) DEFAULT NULL,
KEY `ind_9090909` (`reg_email`),
KEY `ind_90909address` (`reg_add1`),
KEY `ind_999900_0090_I` (`DeliveryLine1`),
KEY `ind_9090909999` (`state`),
KEY `ind_9090909999Ti` (`city`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

和..

CREATE TABLE `temp_pull_newconsumer_boatowners__Final` (
`personfirstname` varchar(100) DEFAULT NULL,
`personlastname` varchar(100) DEFAULT NULL,
`primaryaddress` varchar(100) DEFAULT NULL,
`secondaryaddress` varchar(100) DEFAULT NULL,
`cityname` varchar(100) DEFAULT NULL,
`state` varchar(100) DEFAULT NULL,
`ZipCode` varchar(100) DEFAULT NULL,
`Phone` varchar(100) DEFAULT NULL,
`Email` varchar(100) DEFAULT NULL,
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`),
KEY `ind_9099898778` (`Email`),
KEY `dind_9099898778` (`primaryaddress`),
KEY `dind_909008989` (`state`),
KEY `inddd_909008989` (`cityname`),
KEY `ind_090909` (`secondaryaddress`)
) ENGINE=InnoDB AUTO_INCREMENT=15499921 DEFAULT CHARSET=latin1

Best answer

I had a similar situation, and a temporary table saved me. I inserted everything into a temporary table and then updated the original table with an UPDATE join against that temporary table. Surprisingly, it ran quite fast.
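A minimal sketch of what that approach could look like for the tables in this question (the answer does not show its exact SQL, and the staging table name here is made up):

-- Stage the matches first; PRIMARY KEY (id) plus INSERT IGNORE keeps one
-- matched email per row of the target table.
CREATE TEMPORARY TABLE tmp_matched_email (
  id    INT NOT NULL,
  email VARCHAR(100),
  PRIMARY KEY (id)
) ENGINE=InnoDB;

INSERT IGNORE INTO tmp_matched_email (id, email)
SELECT a.id, b.reg_email
FROM   temp_pull_newconsumer_boatowners a
JOIN   master_consumer_export_06172013_FullMatchBack_Final b
       ON  a.primaryaddress = b.DeliveryLine1
       AND a.personlastname = b.reg_lastname
       AND a.cityname       = b.city
       AND a.state          = b.state
       AND LEFT(a.personfirstname,1) = LEFT(b.reg_firstname,1)
WHERE  IFNULL(a.email,'') = ''
  AND  IFNULL(b.reg_email,'') <> '';

-- Then update the original table with a join against the staging table.
UPDATE temp_pull_newconsumer_boatowners a
JOIN   tmp_matched_email t ON t.id = a.id
SET    a.email = t.email;

The idea is that the final UPDATE joins only on the integer primary key and touches only the rows that actually matched.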

Regarding "MySQL update query taking too long to complete", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/40114149/
