
tensorflow - Different MSE error during training and evaluation in TensorFlow Federated


I am implementing a regression model in TensorFlow Federated. I started from the simple Keras model used in this tutorial: https://www.tensorflow.org/tutorials/keras/regression

I changed the model to use federated learning. This is my model:

import pandas as pd
import tensorflow as tf

from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_federated as tff
from sklearn.preprocessing import StandardScaler

dataset_path = keras.utils.get_file("auto-mpg.data", "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")

column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
                'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
                          na_values="?", comment='\t',
                          sep=" ", skipinitialspace=True)

df = raw_dataset.copy()
df = df.dropna()
# One dataframe per 'Origin' value: each group acts as one federated client.
dfs = [x for _, x in df.groupby('Origin')]

datasets = []
for dataframe in dfs:
    target = dataframe.pop('MPG')

    standard_scaler_x = StandardScaler(with_mean=True, with_std=True)
    normalized_values = standard_scaler_x.fit_transform(dataframe.values)

    dataset = tf.data.Dataset.from_tensor_slices(
        {'x': normalized_values, 'y': target.values})
    train_dataset = dataset.shuffle(len(dataframe)).repeat(10).batch(20)
    # test_dataset is built per client but never used later in this script.
    test_dataset = dataset.shuffle(len(dataframe)).batch(1)
    datasets.append(train_dataset)


def build_model():
    model = keras.Sequential([
        layers.Dense(64, activation='relu', input_shape=[7]),
        layers.Dense(64, activation='relu'),
        layers.Dense(1)
    ])
    return model


model = build_model()

# TFF needs a sample batch to infer the model's input spec.
sample_batch = tf.nest.map_structure(
    lambda x: x.numpy(), iter(datasets[0]).next())

# Note: defined but unused; the model below is compiled with loss='mse'.
def loss_fn_Federated(y_true, y_pred):
    return tf.reduce_mean(tf.keras.losses.MSE(y_true, y_pred))

def create_tff_model():
    # Clone the Keras model so each call builds fresh variables.
    keras_model_clone = tf.keras.models.clone_model(model)
    # adam = keras.optimizers.Adam()
    adam = tf.keras.optimizers.SGD(0.002)
    keras_model_clone.compile(optimizer=adam, loss='mse',
                              metrics=[tf.keras.metrics.MeanSquaredError()])
    return tff.learning.from_compiled_keras_model(keras_model_clone, sample_batch)

print("Create averaging process")
# This command builds all the TensorFlow graphs and serializes them:
iterative_process = tff.learning.build_federated_averaging_process(model_fn=create_tff_model)

print("Initzialize averaging process")
state = iterative_process.initialize()

print("Start iterations")
for _ in range(10):
state, metrics = iterative_process.next(state, datasets)
print('metrics={}'.format(metrics))
Output:

Start iterations
metrics=<mean_squared_error=95.8644027709961,loss=96.28633880615234>
metrics=<mean_squared_error=9.511247634887695,loss=9.522096633911133>
metrics=<mean_squared_error=8.26853084564209,loss=8.277074813842773>
metrics=<mean_squared_error=7.975323677062988,loss=7.9771647453308105>
metrics=<mean_squared_error=7.618809700012207,loss=7.644164562225342>
metrics=<mean_squared_error=7.347906112670898,loss=7.340310096740723>
metrics=<mean_squared_error=7.210267543792725,loss=7.210223197937012>
metrics=<mean_squared_error=7.045553207397461,loss=7.045469760894775>
metrics=<mean_squared_error=6.861278533935547,loss=6.878870487213135>
metrics=<mean_squared_error=6.80275297164917,loss=6.817670822143555>
evaluation = tff.learning.build_federated_evaluation(model_fn=create_tff_model)

test_metrics = evaluation(state.model, datasets)
print(test_metrics)

Output:

<mean_squared_error=27.308320999145508,loss=27.19877052307129>

I am confused as to why, after 10 iterations, the MSE from evaluation on this same training data is so much higher, while the MSE returned by the iterative process is much smaller. What am I doing wrong here? Is there something hidden in TFF's implementation of federated learning? Could someone explain this to me?

Best Answer

You have actually hit upon a genuinely interesting phenomenon in federated learning. In particular, the question to ask here is: how are the training metrics computed?

Training metrics are generally computed over the course of local training, so they are computed as the clients are fitting their local data; in TFF they are computed before each local step is taken - this happens here, during the forward-pass call. If you imagine the extreme case in which the metrics are computed only at the end of each client's round of training, you can see one thing clearly: a client is reporting metrics that represent how well it fit its own local data.
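To make this concrete, here is a rough sketch of the kind of per-batch metric accumulation that happens on each client during local training (illustrative only - the names and structure here are not TFF's actual internals):

import tensorflow as tf

def local_training_round(model, optimizer, local_dataset):
    # Running MSE over the client's local steps.
    metric = tf.keras.metrics.MeanSquaredError()
    for batch in local_dataset:
        with tf.GradientTape() as tape:
            preds = model(batch['x'], training=True)
            loss = tf.reduce_mean(tf.keras.losses.MSE(batch['y'], preds))
        # The metric is updated on the forward pass, i.e. *before* the
        # weights are changed by this step's gradient update.
        metric.update_state(batch['y'], preds)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    # The reported "training metric" is this running average over the
    # client's own fitting process on its own data.
    return metric.result()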

Federated learning, however, must produce a single global model at the end of each round of training - in federated averaging, these local models are averaged together in parameter space. In the general case it is unclear how to interpret such a step intuitively - averaging nonlinear models in parameter space does not give you the average of their predictions, or anything of the sort.
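In code, that averaging step looks roughly like the following (a minimal sketch, assuming the per-client weights have already been collected; real federated averaging weights each client's contribution by its number of examples):

def average_in_parameter_space(client_weight_lists, num_examples):
    # client_weight_lists: one list of per-layer weight arrays per client.
    # num_examples: number of training examples each client used.
    total = sum(num_examples)
    averaged = []
    # zip(*...) groups the clients' copies of each layer together.
    for layer_copies in zip(*client_weight_lists):
        averaged.append(sum(w * (n / total)
                            for w, n in zip(layer_copies, num_examples)))
    return averaged

Averaging the weights of ReLU networks this way is a purely parameter-space operation, which is why the averaged model's predictions need not resemble any individual client's local fit.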

Federated evaluation takes this averaged model and runs local evaluation on every client, without fitting the local data at all. So if your client datasets have quite different distributions, you should expect the metrics returned by federated evaluation to differ substantially from those returned by a round of federated training - federated averaging reports metrics collected during the process of fitting the local data, while federated evaluation reports metrics collected after all of these locally trained models have been averaged.
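Federated evaluation, by contrast, is conceptually just the following (again a sketch of the idea, not TFF's implementation): push the one averaged model to every client and measure it there, with no local gradient steps at all:

import tensorflow as tf

def federated_evaluation(global_weights, model, client_datasets):
    metric = tf.keras.metrics.MeanSquaredError()
    model.set_weights(global_weights)  # the same averaged model everywhere
    for local_dataset in client_datasets:
        for batch in local_dataset:
            # No optimizer step: the model is never fitted to the local
            # data before being measured on it.
            metric.update_state(batch['y'], model(batch['x'], training=False))
    return metric.result()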

Indeed, if you interleave calls to the iterative process's next function and the evaluation function, you will see a pattern like the following:

train metrics=<mean_squared_error=88.22489929199219,loss=88.6319351196289>
eval metrics=<mean_squared_error=33.69473648071289,loss=33.55160140991211>
train metrics=<mean_squared_error=8.873666763305664,loss=8.882776260375977>
eval metrics=<mean_squared_error=29.235883712768555,loss=29.13833236694336>
train metrics=<mean_squared_error=7.932246208190918,loss=7.918393611907959>
eval metrics=<mean_squared_error=27.9038028717041,loss=27.866817474365234>
train metrics=<mean_squared_error=7.573018550872803,loss=7.576478958129883>
eval metrics=<mean_squared_error=27.600923538208008,loss=27.561887741088867>
train metrics=<mean_squared_error=7.228050708770752,loss=7.224897861480713>
eval metrics=<mean_squared_error=27.46322250366211,loss=27.36537742614746>
train metrics=<mean_squared_error=7.049572944641113,loss=7.03688907623291>
eval metrics=<mean_squared_error=26.755760192871094,loss=26.719152450561523>
train metrics=<mean_squared_error=6.983217716217041,loss=6.954374313354492>
eval metrics=<mean_squared_error=26.756895065307617,loss=26.647253036499023>
train metrics=<mean_squared_error=6.909178256988525,loss=6.923810005187988>
eval metrics=<mean_squared_error=27.047882080078125,loss=26.86684799194336>
train metrics=<mean_squared_error=6.8190460205078125,loss=6.79202938079834>
eval metrics=<mean_squared_error=26.209386825561523,loss=26.10053062438965>
train metrics=<mean_squared_error=6.7200140953063965,loss=6.737307071685791>
eval metrics=<mean_squared_error=26.682661056518555,loss=26.64984703063965>
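The loop producing this interleaved output is just the two calls from your own code alternated (with evaluation built before the loop rather than after it):

for round_num in range(10):
    state, train_metrics = iterative_process.next(state, datasets)
    print('train metrics={}'.format(train_metrics))
    eval_metrics = evaluation(state.model, datasets)
    print('eval metrics={}'.format(eval_metrics))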

That is, your federated evaluation is decreasing too, just much more slowly than your training metrics - effectively it is measuring the variation across your client datasets. You can verify this by running:

eval_metrics = evaluation(state.model, [datasets[0]])
print('eval metrics on 0th dataset={}'.format(eval_metrics))
eval_metrics = evaluation(state.model, [datasets[1]])
print('eval metrics on 1st dataset={}'.format(eval_metrics))
eval_metrics = evaluation(state.model, [datasets[2]])
print('eval metrics on 2nd dataset={}'.format(eval_metrics))

and you will see results along the lines of:

eval metrics on 0th dataset=<mean_squared_error=9.426984786987305,loss=9.431192398071289>
eval metrics on 1st dataset=<mean_squared_error=34.96992111206055,loss=34.96992492675781>
eval metrics on 2nd dataset=<mean_squared_error=72.94075775146484,loss=72.88787841796875>

So you can see that your averaged model performs quite differently across these three datasets.

One final note: you may notice that the final result of the evaluate function is not the average of your three losses - this is because the evaluate function weights by example rather than by client; that is, clients with more data receive more weight in the average.
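Concretely, the overall evaluation number is the example-weighted combination of the per-client errors, along these lines (a sketch; the counts are whatever your three Origin groups actually contain):

# Per-client MSEs from the three evaluations above, and the group sizes.
per_client_mse = [9.43, 34.97, 72.94]
num_examples = [len(frame) for frame in dfs]  # sizes of the Origin groups

total = sum(num_examples)
overall = sum(m * n / total for m, n in zip(per_client_mse, num_examples))
# 'overall' should land near the federated evaluation result, unlike the
# plain mean (9.43 + 34.97 + 72.94) / 3 of the three client losses.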

Hope this helps!

For "tensorflow - Different MSE error during training and evaluation in TensorFlow Federated", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59193069/
