
python - Training a submodel instead of the full model in TensorFlow Federated


I am trying to modify the TensorFlow Federated examples. I want to create a submodel from the original model, use the newly created submodel during the training phase, and then send its weights to the server so that it can update the original model.

I know this should not be done inside client_update, and that the server should instead send the correct submodel directly to the clients, but for now I prefer to do it this way.

Right now I have two problems:

  1. It seems I cannot create a new model inside the client_update function like this:

    @tf.function
    def client_update(model, dataset, server_message, client_optimizer):
      """Performs client local training of `model` on `dataset`.

      Args:
        model: A `tff.learning.Model`.
        dataset: A `tf.data.Dataset`.
        server_message: A `BroadcastMessage` from server.
        client_optimizer: A `tf.keras.optimizers.Optimizer`.

      Returns:
        A `ClientOutput`.
      """
      model_weights = model.weights

      import dropout_model
      dropout_model = dropout_model.get_dropoutmodel(model)

      initial_weights = server_message.model_weights
      tf.nest.map_structure(lambda v, t: v.assign(t), model_weights,
                            initial_weights)
      .....

The error is this:

    ValueError: tf.function-decorated function tried to create variables on non-first call.
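
This error can be reproduced in a few lines of plain TensorFlow, independent of TFF; a minimal sketch (not from the example code):

    import tensorflow as tf

    @tf.function
    def make_and_use(x):
      v = tf.Variable(1.0)  # a new variable would be created on every trace
      return v * x

    # Raises the ValueError quoted above, because tf.function only supports
    # creating variables on the first trace.
    make_and_use(tf.constant(2.0))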

The model is created like this:

    def from_original_to_submodel(only_digits=True):
      """The CNN model used in https://arxiv.org/abs/1602.05629.

      Args:
        only_digits: If True, uses a final layer with 10 outputs, for use with
          the digits-only EMNIST dataset. If False, uses 62 outputs for the
          larger dataset.

      Returns:
        An uncompiled `tf.keras.Model`.
      """
      data_format = 'channels_last'
      input_shape = [28, 28, 1]
      max_pool = functools.partial(
          tf.keras.layers.MaxPooling2D,
          pool_size=(2, 2),
          padding='same',
          data_format=data_format)
      conv2d = functools.partial(
          tf.keras.layers.Conv2D,
          kernel_size=5,
          padding='same',
          data_format=data_format,
          activation=tf.nn.relu)
      model = tf.keras.models.Sequential([
          conv2d(filters=32, input_shape=input_shape),
          max_pool(),
          conv2d(filters=64),
          max_pool(),
          tf.keras.layers.Flatten(),
          tf.keras.layers.Dense(410, activation=tf.nn.relu),  # 20% dropout
          tf.keras.layers.Dense(10 if only_digits else 62),
      ])
      return model

    def get_dropoutmodel(model):
      keras_model = from_original_to_submodel(only_digits=False)
      loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
      return tff.learning.from_keras_model(
          keras_model, loss=loss, input_spec=model.input_spec)
  2. This is more of a theoretical question. As I said, I want to train a submodel, so I would take the original model weights sent by the server in initial_weights and, for each layer, assign a random sublist of those weights to the submodel. For example, if initial_weights for layer 6 contains 100 elements and my new submodel's corresponding layer has only 40 elements, I would pick 40 elements at random using a seed, train, and then send the seed to the server so that it can select the same indices and update only those (a small NumPy sketch follows this item). Is that correct? My second version would be to keep all 100 elements (40 random ones and 60 set to 0), but I think that would hurt model performance during aggregation on the server side.
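
As an illustration of the seed idea, a plain NumPy sketch (the names select_indices, full_dim and sub_dim are illustrative only, not from any library):

    import numpy as np

    def select_indices(seed, full_dim, sub_dim):
      """Deterministically choose `sub_dim` of `full_dim` indices from a seed."""
      rng = np.random.default_rng(seed)
      return np.sort(rng.choice(full_dim, size=sub_dim, replace=False))

    # Client: pick 40 of the layer's 100 units with the seed and train them.
    client_idx = select_indices(seed=42, full_dim=100, sub_dim=40)

    # Server: the same seed reproduces the same indices, so the 40 returned
    # values can be scattered back into the full 100-element layer.
    server_idx = select_indices(seed=42, full_dim=100, sub_dim=40)
    assert np.array_equal(client_idx, server_idx)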

Edit:

I modified the client_update_fn function as follows:

    @tff.tf_computation(tf_dataset_type, server_message_type)
    def client_update_fn(tf_dataset, server_message):
      model = model_fn()
      submodel = submodel_fn()
      client_optimizer = client_optimizer_fn()
      return client_update(model, submodel, tf_dataset, server_message,
                           client_optimizer)

I also added a new parameter to the build_federated_averaging_process function, as follows:

    def build_federated_averaging_process(
        model_fn, submodel_fn,
        server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0),
        client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.1)):

In main.py I did this:

    def tff_submodel_fn():
      keras_model = create_submodel_dropout(only_digits=False)
      loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
      return tff.learning.from_keras_model(
          keras_model, loss=loss,
          input_spec=train_data.element_type_structure)

    iterative_process = simple_fedavg_tff.build_federated_averaging_process(
        tff_model_fn, tff_submodel_fn, server_optimizer_fn,
        client_optimizer_fn)

Now I can use the submodel inside client_update:

    @tf.function
    def client_update(model, submodel, dataset, server_message, client_optimizer):
      """Performs client local training of `model` on `dataset`.

      Args:
        model: A `tff.learning.Model`.
        submodel: A `tff.learning.Model`.
        dataset: A `tf.data.Dataset`.
        server_message: A `BroadcastMessage` from server.
        client_optimizer: A `tf.keras.optimizers.Optimizer`.

      Returns:
        A `ClientOutput`.
      """
      model_weights = model.weights
      initial_weights = server_message.model_weights
      submodel_weights = submodel.weights
      tf.nest.map_structure(lambda v, t: v.assign(t), submodel_weights,
                            initial_weights)
      num_examples = tf.constant(0, dtype=tf.int32)
      loss_sum = tf.constant(0, dtype=tf.float32)

      # Explicitly using `iter` for the dataset is a trick that makes TFF more
      # robust in GPU simulation and slightly more performant in the
      # unconventional usage of a large number of small datasets.
      weights_delta = []
      testing = False
      if not testing:
        for batch in iter(dataset):
          with tf.GradientTape() as tape:
            outputs = model.forward_pass(batch)
          grads = tape.gradient(outputs.loss, submodel_weights.trainable)
          client_optimizer.apply_gradients(zip(grads, submodel_weights.trainable))
          batch_size = tf.shape(batch['x'])[0]
          num_examples += batch_size
          loss_sum += outputs.loss * tf.cast(batch_size, tf.float32)

      weights_delta = tf.nest.map_structure(lambda a, b: a - b,
                                            submodel_weights.trainable,
                                            initial_weights.trainable)
      client_weight = tf.cast(num_examples, tf.float32)
      return ClientOutput(weights_delta, client_weight, loss_sum / client_weight)

I get this error:

    ValueError: No gradients provided for any variable: ['conv2d_2/kernel:0', 'conv2d_2/bias:0', 'conv2d_3/kernel:0', 'conv2d_3/bias:0', 'dense_2/kernel:0', 'dense_2/bias:0', 'dense_3/kernel:0', 'dense_3/bias:0'].

    Fatal Python error: Segmentation fault

    Current thread 0x00007f27af18b740 (most recent call first):
      File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1853 in _create_c_op
      File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 2041 in __init__
      File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 3557 in _create_op_internal
      File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 599 in _create_op_internal
      File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py", line 748 in _apply_op_helper
      File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/ops/gen_dataset_ops.py", line 1276 in delete_iterator
      File "virtual-environment/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 549 in __del__

    Process finished with exit code 11

At the moment the submodel is identical to the original model; I copied the create_original_fedavg_cnn_model function into create_submodel_dropout, so I don't understand where it is going wrong.

Best Answer

In general, we cannot create variables inside a tf.function, because the method will be reused across TFF computations; technically, variables may only be created once inside a tf.function. In most of the TFF library code you can see that the model is actually created outside the tf.function and passed into the tf.function as a parameter (for example: https://github.com/tensorflow/federated/blob/44d012f690005ecf9217e3be970a4f8a356e88ed/tensorflow_federated/python/examples/simple_fedavg/simple_fedavg_tff.py#L101). Another possibility to investigate might be the tf.init_scope context, but be sure to read all of its documentation on the caveats and behaviors first.
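
As an illustration of that pattern, here is a minimal sketch in plain TensorFlow (not the TFF example's code): all variables are created once, eagerly, outside the tf.function, which only reads and updates them:

    import tensorflow as tf

    # Variables are created once, outside the traced function.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

    @tf.function
    def train_step(x, y):
      # Only existing variables are read and assigned while tracing;
      # no variables are created, so the function can be safely retraced.
      with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
      grads = tape.gradient(loss, model.trainable_variables)
      optimizer.apply_gradients(zip(grads, model.trainable_variables))
      return loss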

TFF has a newer communication primitive called tff.federated_select that might be helpful here. The intrinsic comes with two tutorials:

  1. Sending Different Data To Particular Clients With tff.federated_select, which discusses the communication primitive specifically.
  2. Client-efficient large-model federated learning via federated_select and sparse aggregation, which demonstrates federated learning for linear regression using federated_select, and demonstrates the need for "sparse aggregation", i.e. the difficulty you identified with padding zeros (a small sketch of the primitive follows this list).
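
As a small taste of the primitive, here is a sketch adapted from the first tutorial (the types and values are assumptions based on that tutorial, not code from this question): the server holds a comma-separated string, each client asks for a set of integer keys, and tff.federated_select delivers the selected entries to each client as a sequence:

    import tensorflow as tf
    import tensorflow_federated as tff

    client_keys_type = tff.TensorType(dtype=tf.int32, shape=[None])

    @tff.tf_computation(tf.string, tf.int32)
    def select_fn(database, key):
      # Split the server value into entries and return the entry at `key`.
      return tf.gather(tf.strings.split(database, ','), key)

    @tff.federated_computation(
        tff.type_at_clients(client_keys_type),
        tff.type_at_server(tf.string))
    def select_from_database(client_keys, database):
      max_key = tff.federated_value(5, tff.SERVER)
      return tff.federated_select(client_keys, max_key, database, select_fn)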

Regarding "python - Training a submodel instead of the full model in TensorFlow Federated", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/69767043/
