
mysql - Why is my MySQL slower after migrating from GCP to Azure?

Reposted · Author: 行者123 · Updated: 2023-12-05 07:00:56

I migrated my MySQL database from GCP to Azure (both running 5.7), but performance seems to have suffered.

Server before migration: 2 vCPUs with 7.5GB memory
Server after migration: 2 vCPUs with 8GB memory

Both servers run version 5.7 of MySQL Server. My database is currently around 6GB in size and grows by more than 100MB per day. It contains only 32 tables, although a handful of them are in the multi-million-row range.

I read about innodb_buffer_pool_size; GCP apparently sets it to roughly 80% of memory, which would put it at 6GB. I have set innodb_buffer_pool_size on the new server to the same value.

Before updating this value (which is when I first noticed the performance drop), innodb_buffer_pool_size on the new server was set to 0.1GB. I then decided to update it to match the GCP server's setting, in the hope that this would help.

Following this documentation, I was able to update the buffer pool size.

How did I originally check innodb_buffer_pool_size?

-- returned 0.111...
SELECT @@innodb_buffer_pool_size/1024/1024/1024;

How did I update innodb_buffer_pool_size?

SET GLOBAL innodb_buffer_pool_size=6442450944;

I checked the resize status with this query:

-- returned 'Completed resizing buffer pool at 200920 13:46:20.'
SHOW STATUS WHERE Variable_name='InnoDB_buffer_pool_resize_status';
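
One caveat worth noting: in MySQL 5.7, a `SET GLOBAL` change is not persistent and reverts on the next server restart (`SET PERSIST` only arrived in 8.0). A minimal sketch of making the setting permanent in the configuration file (the path and section layout vary by distribution):

```ini
# /etc/mysql/my.cnf (path varies by distribution)
[mysqld]
# ~80% of the 8GB VM, matching the GCP setting
innodb_buffer_pool_size = 6G
```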

I execute roughly 2 queries per second, with a daily peak of 250k. I can't be certain, but surely this level of usage shouldn't be enough to tank performance?

How did I check performance?

I pulled up a list of the queries being run, along with how long the server took to respond. I tested these queries in Navicat, DataGrip, and the CLI, with similar results.

I'm not sure which queries to include here to provide as much information as possible, so if I haven't included anything useful I can update on request.

-- Fetching 100k rows from a 3.1m rows table
-- Time took: 21.248s
SELECT * FROM `profile_connections` LIMIT 100000;

-- (SECOND TIME) Fetching 100k rows from a 3.1m rows table
-- Time took: 1.735s
SELECT * FROM `profile_connections` LIMIT 100000;

-- Fetching a random row from a 3.1m row table
-- Time took: 0.857s
SELECT * FROM `profile_connections` WHERE `id` = 2355895 LIMIT 1;

-- (SECOND TIME) Fetching a random row from a 3.1m row table
-- Time took: 0.850s
SELECT * FROM `profile_connections` WHERE `id` = 2355895 LIMIT 1;

-- Fetching all rows from a 20 row table
-- Time took: 40.010s
SELECT * FROM `profile_types`

-- (SECOND) Fetching all rows from a 20 row table
-- Time took: 0.850s
SELECT * FROM `profile_types`

But sometimes I can run all of the above queries and get responses within 2 to 5 seconds. Performance seems to come and go: there are huge differences in how long the same query takes depending on when it runs, which is what I'm currently trying to diagnose.

I ran mysqltuner and got these performance metrics:

[--] Up for: 47m 39s (38K q [13.354 qps], 1K conn, TX: 403M, RX: 63M)
[--] Reads / Writes: 50% / 50%
[--] Binary logging is disabled
[--] Physical Memory : 7.8G
[--] Max MySQL memory : 146.8G
[--] Other process memory: 0B
[--] Total buffers: 6.0G global + 954.7M per thread (151 max threads)
[--] P_S Max memory usage: 72B
[--] Galera GCache Max memory usage: 0B
[!!] Maximum reached memory usage: 21.9G (281.61% of installed RAM)
[!!] Maximum possible memory usage: 146.8G (1888.34% of installed RAM)
[!!] Overall possible memory usage with other process exceeded memory
[OK] Slow queries: 3% (1K/38K)
[OK] Highest usage of available connections: 11% (17/151)
[OK] Aborted connections: 0.67% (9/1342)
[!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
[OK] Query cache is disabled by default due to mutex contention on multiprocessor machines.
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 41 sorts)
[OK] No joins without indexes
[OK] Temporary tables created on disk: 4% (82 on disk / 1K total)
[OK] Thread cache hit rate: 98% (17 created / 1K connections)
[OK] Table cache hit rate: 63% (667 open / 1K opened)
[OK] table_definition_cache(1400) is upper than number of tables(302)
[OK] Open file limit used: 1% (55/5K)
[OK] Table locks acquired immediately: 100% (1K immediate / 1K locks)
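
The `[!!]` memory warnings follow from simple arithmetic: mysqltuner multiplies the per-thread buffers by `max_connections` and adds the global buffers. A quick sketch of that calculation, using the figures straight from the report above:

```python
# Reproduce mysqltuner's "Max MySQL memory" figure from the report above.
global_buffers_gb = 6.0   # innodb_buffer_pool_size etc.
per_thread_mb = 954.7     # sort/join/read buffers allocated per connection
max_connections = 151     # MySQL 5.7 default

max_possible_gb = global_buffers_gb + per_thread_mb * max_connections / 1024
print(f"{max_possible_gb:.1f} GB")  # 146.8 GB, far beyond the 7.8 GB of RAM
```

So with a 6GB buffer pool on an 8GB VM, a burst of connections doing memory-hungry work can push the server into swap, which would explain wildly variable query times.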

The slow query log contained many runs of the same queries, so I've truncated it to just a few entries.

# Time: 2020-09-20T16:45:04.230173Z
# User@Host: root[root] @ [51.132.38.176] Id: 7
# Query_time: 1.022011 Lock_time: 0.000084 Rows_sent: 1 Rows_examined: 1058161
SET timestamp=1600620304;
SELECT @id := `id`,`item`
FROM `queue_items`
WHERE `processed_at` IS NULL AND `completed_at` IS NULL AND `confirmed` = '1'ORDER BY `id` ASC
LIMIT 1
FOR UPDATE;
# Time: 2020-09-20T16:45:09.676613Z
# User@Host: root[root] @ [51.132.38.176] Id: 5
# Query_time: 1.198063 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
SET timestamp=1600620309;
COMMIT;
# Time: 2020-09-20T16:45:22.938081Z
# User@Host: root[root] @ [51.105.34.135] Id: 4
# Query_time: 5.426964 Lock_time: 0.000133 Rows_sent: 0 Rows_examined: 1
SET timestamp=1600620322;
UPDATE `queue_items` SET `completed_at` = '2020-09-20 16:45:17', `updated_at` = '2020-09-20 16:45:17' WHERE `id` = 1818617;

The structure of the queue_items table, as requested.

/*
Navicat MySQL Data Transfer

Target Server Type : MySQL
Target Server Version : 50731
File Encoding : 65001

Date: 20/09/2020 21:28:41
*/

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for queue_items
-- ----------------------------
DROP TABLE IF EXISTS `queue_items`;
CREATE TABLE `queue_items` (
`id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT,
`item` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
`worker_id` bigint(20) UNSIGNED NOT NULL DEFAULT 0,
`type_id` bigint(20) UNSIGNED NOT NULL,
`confirmed` enum('0','1') CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
`optimized` enum('0','1') CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
`owner_id` bigint(20) NOT NULL DEFAULT 0,
`processed_at` datetime(0) NULL DEFAULT NULL,
`completed_at` datetime(0) NULL DEFAULT NULL,
`created_at` timestamp(0) NULL DEFAULT NULL,
`updated_at` timestamp(0) NULL DEFAULT NULL,
`json_data` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NULL,
`check_state` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NULL DEFAULT 'unchecked',
PRIMARY KEY (`id`) USING BTREE,
UNIQUE INDEX `queue_items_item_unique`(`item`) USING BTREE,
INDEX `queue_items_type_id_foreign`(`type_id`) USING BTREE,
CONSTRAINT `queue_items_type_id_foreign` FOREIGN KEY (`type_id`) REFERENCES `queue_types` (`id`) ON DELETE RESTRICT ON UPDATE RESTRICT
) ENGINE = InnoDB AUTO_INCREMENT = 6515764 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_ci ROW_FORMAT = DYNAMIC;

SET FOREIGN_KEY_CHECKS = 1;

Best Answer

The performance difference between the two servers may come down to their overall configuration (processor, compute power, storage, and so on). Do you have any insight into the expected performance of the two machines?

Also, if there are write locks and long-running queries on profile_types, or metadata locks stalling queries entirely, these results are expected:

-- Fetching all rows from a 20 row table
-- Time took: 40.010s
SELECT * FROM `profile_types`

-- (SECOND) Fetching all rows from a 20 row table
-- Time took: 0.850s
SELECT * FROM `profile_types`
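
To confirm whether locking is the cause, a sketch of what to check while one of these queries is stalled (the sys schema ships with MySQL 5.7; the metadata-lock view additionally needs the `wait/lock/metadata/sql/mdl` instrument enabled):

```sql
-- See what every connection is doing while the query is stalled
SHOW FULL PROCESSLIST;

-- Who is blocking whom on InnoDB row locks
SELECT * FROM sys.innodb_lock_waits;

-- Metadata lock waits (requires the mdl instrument in performance_schema)
SELECT * FROM sys.schema_table_lock_waits;
```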

One thing I noticed is that you don't seem to have an index covering some of your queries. Try creating this index:

CREATE INDEX queue_ndx ON queue_items(confirmed, processed_at, completed_at);
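
After creating it, you can check with EXPLAIN whether the optimizer actually picks the index for the FOR UPDATE query from the slow log; the rows estimate should drop from the ~1M currently being examined:

```sql
EXPLAIN SELECT @id := `id`, `item`
FROM `queue_items`
WHERE `processed_at` IS NULL AND `completed_at` IS NULL AND `confirmed` = '1'
ORDER BY `id` ASC
LIMIT 1;
-- look for key = queue_ndx and a small rows estimate
```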

Regarding "mysql - Why is my MySQL slower after migrating from GCP to Azure?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/63982542/
