
MySQL InnoDB : Best practice bulk insert

Reposted · Author: 行者123 · Updated: 2023-11-29 01:42:24

I am trying to insert 2 × 700,000 records into an InnoDB table, but it seems rather slow to me.

I have tried several things, but I am not sure what the most efficient way to do these inserts is.

The CREATE TABLE SQL:

DROP TABLE IF EXISTS `booking_daily_analysis`;
CREATE TABLE IF NOT EXISTS `booking_daily_analysis` (
`id` INT NOT NULL AUTO_INCREMENT,
`booking_id` INT NULL,
`action_id` INT NOT NULL,
`creative_id` INT NULL,
`position_id` INT NULL,
`profile_id` INT NULL,
`start` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`end` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`hits` INT NOT NULL DEFAULT 0,
`uniqueHits` INT NOT NULL DEFAULT 0 COMMENT 'contacts van vroeger',
PRIMARY KEY (`id`,`action_id`)
#INDEX `booking_id_idx` (`booking_id` ASC),
#FOREIGN KEY (`booking_id`) REFERENCES `booking` (`id`) ON DELETE SET NULL ON UPDATE CASCADE,
#INDEX `creative_id_idx` (`creative_id` ASC),
#FOREIGN KEY (`creative_id`) REFERENCES `creative` (`id`) ON DELETE SET NULL ON UPDATE CASCADE,
#INDEX `position_id_idx` (`position_id` ASC),
#FOREIGN KEY (`position_id`) REFERENCES `position` (`id`) ON DELETE SET NULL ON UPDATE CASCADE,
#INDEX `action_id_idx` (`action_id` ASC),
#FOREIGN KEY (`action_id`) REFERENCES `action` (`id`) ON DELETE NO ACTION ON UPDATE CASCADE,
#INDEX `profile_id_idx` (`profile_id` ASC),
#FOREIGN KEY (`profile_id`) REFERENCES `profile` (`id`) ON DELETE SET NULL ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARACTER SET=utf8;

As you can see, there are a lot of indexes and foreign keys (InnoDB needs an index for every foreign key), but indexes slow down inserts, so I tried adding them after the inserts with ALTER statements:

START TRANSACTION;
alter table `booking_daily_analysis` add index `booking_id_idx` (`booking_id` ASC), add constraint `fk_booking_id` foreign key (`booking_id`) REFERENCES `booking` (`id`) on delete set null on update cascade;
alter table `booking_daily_analysis` add index `creative_id_idx` (`creative_id` ASC), add constraint `fk_creative_id` foreign key (`creative_id`) references `creative` (`id`) on delete set null on update cascade;
alter table `booking_daily_analysis` add index `position_id_idx` (`position_id` ASC), add constraint `fk_position_id` foreign key (`position_id`) references `position` (`id`) on delete set null on update cascade;
alter table `booking_daily_analysis` add index `action_id_idx` (`action_id` ASC), add constraint `fk_action_id` foreign key (`action_id`) references `action` (`id`) on delete set null on update cascade;
alter table `booking_daily_analysis` add index `profile_id_idx` (`profile_id` ASC), add constraint `fk_profile_id` foreign key (`profile_id`) references `profile` (`id`) on delete set null on update cascade;
COMMIT;

I'm not sure whether the transaction is needed here.

At the top of the script I set these options:

SET foreign_key_checks=0;
SET unique_checks=0;

And at the bottom:

SET unique_checks = 1;
SET foreign_key_checks = 1;

Then come the 2 × 700,000 insert statements (just two queries):

START TRANSACTION;
insert into nrc.booking_daily_analysis (id, action_id, start, end, hits, uniqueHits, position_id, booking_id, creative_id, profile_id)
select id, 1, start, end, impressions, contacts, position_id, booking_id, creative_id, new_profile_id from adhese_nrc.temp_ad_slot_ids;
COMMIT;

START TRANSACTION;
-- Insert clicks for click action (click action is 2)
insert into nrc.booking_daily_analysis (id, action_id, start, end, hits, uniqueHits, position_id, booking_id, creative_id, profile_id)
select id, 2, start, end, clicks, 0, position_id, booking_id, creative_id, new_profile_id from adhese_nrc.temp_ad_slot_ids;
COMMIT;

As you can see, the only difference between the two inserts is the action id (1 -> 2).

So I am wondering: is this the way to go, or am I missing something here?

Latest output from MySQL Workbench:

14:32:13    START TRANSACTION   0 row(s) affected   0.000 sec

14:32:13 FIRST INSERT 717718 row(s) affected Records: 717718 @ 11.263 sec

14:32:24 COMMIT 0 row(s) affected 0.020 sec
14:32:24 START TRANSACTION 0 row(s) affected 0.000 sec

14:32:24 SECOND INSERT 717718 row(s) affected Records: 717718 @ 21.268 sec

14:32:46 COMMIT 0 row(s) affected 0.011 sec
14:32:46 START TRANSACTION 0 row(s) affected 0.000 sec

14:32:46 add index `booking_id_idx` 1435436 row(s) affected Records: 1435436 @ 39.393 sec
14:33:25 add index `creative_id_idx` 1435436 row(s) affected Records: 1435436 @ 68.801 sec
14:34:34 add index `position_id_idx` 1435436 row(s) affected Records: 1435436 Duplicates: 0 Warnings: 0 @ 142.877 sec
14:36:57 add index `action_id_idx` 1435436 row(s) affected Records: 1435436 Duplicates: 0 Warnings: 0 @ 162.160 sec
14:40:00 add index `profile_id_idx` 1435436 row(s) affected Records: 1435436 Duplicates: 0 Warnings: 0 @ 763.309 sec

Best Answer

This manual page also suggests changing innodb_autoinc_lock_mode.

If you do not need the feature, disable binary logging.

Increasing the size of some InnoDB buffers can also help (in particular innodb_buffer_pool_size).
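As a sketch of the settings mentioned above (the concrete values are only assumptions; size them to your hardware and, for innodb_autoinc_lock_mode = 2, make sure your replication setup is row-based or replication is not in use):

```sql
-- In my.cnf / my.ini, [mysqld] section (server restart required on older MySQL):
--   innodb_buffer_pool_size  = 2G   -- illustrative; often ~50-70% of RAM on a dedicated DB host
--   innodb_autoinc_lock_mode = 2    -- "interleaved": no table-level AUTO_INC lock for bulk inserts

-- Per-session, before running the bulk inserts:
SET sql_log_bin = 0;  -- skip binary logging for this session (requires SUPER privilege)
```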

I don't think using transactions is advisable in this case. Transactions can optimize a small number of consecutive changes that must be applied together, by merging them into a single write. In your case, I believe you are only filling up the redo log for no benefit.

Which brings me to another suggestion: try inserting a smaller number of rows at a time, like this:

INSERT INTO destination
SELECT * FROM source LIMIT 0, 10000;

INSERT INTO destination
SELECT * FROM source LIMIT 10000, 10000; -- and so on
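One caveat about the chunking above: `LIMIT offset, count` still scans and discards all skipped rows, so later chunks get progressively slower. A keyset variant avoids that, assuming `source` has an indexed integer `id` column (as the asker's table does):

```sql
-- Chunk by primary-key range instead of OFFSET (assumes an indexed `id` column).
INSERT INTO destination
SELECT * FROM source WHERE id >     0 AND id <= 10000;

INSERT INTO destination
SELECT * FROM source WHERE id > 10000 AND id <= 20000; -- and so on
```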

Finally, if you have plenty of memory available, you may want to load the whole dataset into a temporary MEMORY table first, and then insert from that memory table into your destination (possibly in small batches):

CREATE TEMPORARY TABLE destination_tmp LIKE source;
ALTER TABLE destination_tmp ENGINE=MEMORY;
INSERT INTO destination_tmp SELECT * FROM source;
INSERT INTO destination SELECT * FROM destination_tmp;

Make sure the value of max_heap_table_size is large enough.
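For example (the 512 MB figure is purely illustrative; size it to hold your whole dataset, and set it before creating the MEMORY table):

```sql
SHOW VARIABLES LIKE 'max_heap_table_size';
SET SESSION max_heap_table_size = 512 * 1024 * 1024;  -- 512 MB, illustrative
```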

On the topic of MySQL InnoDB : Best practice bulk insert, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/17044470/
