
tensorflow-federated - Memory consumption spikes when training an FL model with a varying number of participants each round

Reposted · Author: 行者123 · Updated: 2023-12-04 15:03:34

I am running the FL algorithm from the image classification tutorial. The number of participants varies each round, according to a predefined list:

number_of_participants_each_round = [
    108, 113, 93, 92, 114, 101, 94, 93, 107, 99, 118, 101, 114, 111, 88,
    101, 86, 96, 110, 80, 118, 84, 91, 120, 110, 109, 113, 96, 112, 107,
    119, 91, 97, 99, 97, 104, 103, 120, 89, 100, 104, 104, 103, 88, 108]

The federated data is preprocessed and batched before training starts.


import collections

import numpy as np
import tensorflow as tf
import tensorflow_federated as tff

NUM_EPOCHS = 5
BATCH_SIZE = 20
SHUFFLE_BUFFER = 418
PREFETCH_BUFFER = 10

def preprocess(dataset):
  def batch_format_fn(element):
    # Flatten the 28x28 images into 784-dim vectors and reshape the labels.
    return collections.OrderedDict(
        x=tf.reshape(element['pixels'], [-1, 784]),
        y=tf.reshape(element['label'], [-1, 1]))

  return dataset.repeat(NUM_EPOCHS).shuffle(SHUFFLE_BUFFER).batch(
      BATCH_SIZE).map(batch_format_fn).prefetch(PREFETCH_BUFFER)


def make_federated_data(client_data, client_ids):
  return [preprocess(client_data.create_tf_dataset_for_client(x))
          for x in client_ids]

federated_train_data = make_federated_data(data_train, data_train.client_ids)
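
As a quick sanity check (an illustrative sketch assuming the EMNIST data from the tutorial, not part of the original code), the element spec of a preprocessed client dataset should show flattened pixels of shape (None, 784) and labels of shape (None, 1):

# Inspect the first preprocessed client dataset (illustrative check).
print(federated_train_data[0].element_spec)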

Each round, participants are sampled at random from federated_train_data[0:expected_total_clients] according to number_of_participants_each_round, and then iterative_process is run for 45 rounds:

expected_total_clients = 500
round_nums = 45

for round_num in range(0, round_nums):
  sampled_clients = np.random.choice(
      a=federated_train_data[0:expected_total_clients],
      size=number_of_participants_each_round[round_num],
      replace=False)

  state, metrics = iterative_process.next(state, list(sampled_clients))
  print('round {:2d}, metrics={}'.format(round_num + 1, metrics))

The problem is that VRAM usage explodes after just a few rounds, reaching 5.5 GB by rounds 6~7 and then growing at roughly 0.8 GB/round, until training finally crashes around rounds 25~26 with VRAM at 17 GB and more than 4,000 Python threads created.

The error message is as follows:

F tensorflow/core/platform/default/env.cc:72] Check failed: ret == 0 (35 vs. 0)Thread creation via pthread_create() failed.
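
The pthread_create failure means the process ran out of OS threads, which is consistent with the thread growth described above. A quick way to watch the thread count per round (a hypothetical diagnostic, not part of the original code) would be:

import threading

# Inside the training loop, log the number of live Python threads after
# each round to confirm the leak (hypothetical diagnostic).
print('round {:2d}, active threads: {}'.format(
    round_num + 1, threading.active_count()))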

### Troubleshooting ###

Reducing number_of_participants_each_round to 20 allows training to finish, but memory consumption is still large and keeps growing.

Running the same code with a fixed number of participants per round, memory consumption stays fine throughout training, at roughly 1.5 ~ 2.0 GB of VRAM in total:

expected_total_clients = 500
fixed_client_size_per_round = 100
round_nums = 45

for round_num in range(0, round_nums):
  sampled_clients = np.random.choice(
      a=federated_train_data[0:expected_total_clients],
      size=fixed_client_size_per_round,
      replace=False)

  state, metrics = iterative_process.next(state, list(sampled_clients))
  print('round {:2d}, metrics={}'.format(round_num + 1, metrics))

Additional details:

OS: MacOS Mojave, 10.14.6
python -V: Python 3.8.5 then downgraded to Python 3.7.9
TF version: 2.4.1
TFF version: 0.18.0
Keras version: 2.4.3

Is this normal memory behavior or a bug? Are there any refactorings or tips to optimize memory consumption?

Best Answer

The problem was a bug in the executor stack of the TFF runtime process.

Full details and the bug fix are available here:

https://github.com/tensorflow/federated/issues/1215
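
Until the fix is available in a release, one possible workaround, a sketch based on the troubleshooting result above rather than on the linked issue, is to keep the number of sampled clients constant so the runtime sees a single cardinality every round:

# Workaround sketch: sample a fixed number of clients every round so the
# TFF runtime can reuse a single executor stack. fixed_size is an
# illustrative choice, not from the original post.
fixed_size = max(number_of_participants_each_round)

for round_num in range(0, round_nums):
  sampled_clients = np.random.choice(
      a=federated_train_data[0:expected_total_clients],
      size=fixed_size,
      replace=False)
  state, metrics = iterative_process.next(state, list(sampled_clients))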

For more on "tensorflow-federated - Memory consumption spikes when training an FL model with a varying number of participants each round", see the similar question on Stack Overflow: https://stackoverflow.com/questions/66557738/
