python - Manually computed validation loss differs from the reported val_loss when using regularization

Reposted · Author: 行者123 · Updated: 2023-12-01 08:36:03

When I compute the validation loss manually in a custom callback, the result differs from what Keras reports when L2 kernel regularization is used.

Example code:

from keras.layers import Dense, Input
from keras import regularizers
import keras.backend as K
from keras.losses import mean_squared_error
from keras.models import Model
from keras.callbacks import Callback
from keras.optimizers import RMSprop


class ValidationCallback(Callback):
    def __init__(self, validation_x, validation_y):
        super(ValidationCallback, self).__init__()
        self.validation_x = validation_x
        self.validation_y = validation_y

    def on_epoch_end(self, epoch, logs=None):
        # What am I missing in this loss calculation that keras is doing?
        validation_y_predicted = self.model.predict(self.validation_x)
        print("My validation loss: %.4f" % K.eval(K.mean(mean_squared_error(self.validation_y, validation_y_predicted))))


input = Input(shape=(1024,))
hidden = Dense(1024, kernel_regularizer=regularizers.l2())(input)
output = Dense(1024, kernel_regularizer=regularizers.l2())(hidden)

model = Model(inputs=[input], outputs=output)

optimizer = RMSprop()
model.compile(loss='mse', optimizer=optimizer)

model.fit(x=x_train,
          y=y_train,
          callbacks=[ValidationCallback(x_validation, y_validation)],
          validation_data=(x_validation, y_validation))

Prints:

10000/10000 [==============================] - 2s 249us/step - loss: 1.3125 - val_loss: 0.1250
My validation loss: 0.0861

What do I need to do to compute the same validation loss in my callback?

Best Answer

This is the expected behavior. L2 regularization modifies the loss function by adding a penalty term (the sum of the squared weights) in order to reduce generalization error.

To compute the same validation loss in your callback, you need to fetch the weights of each layer and add the sum of their squares. The argument `l` of regularizers.l2 is the regularization coefficient of that layer; it defaults to 0.01, which is why the code below uses 0.01 for both layers.
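As a pure-NumPy sketch of that penalty term (the kernel values and coefficient here are made up for illustration, not taken from the model above):

```python
import numpy as np

# Hypothetical 2x2 kernel matrix of a Dense layer
W = np.array([[1.0, -2.0],
              [0.5,  0.0]])
l = 0.01  # the coefficient passed to regularizers.l2(l)

# L2 penalty added to the loss: l * sum of squared kernel entries
penalty = l * np.sum(W ** 2)  # 0.01 * (1 + 4 + 0.25 + 0) = 0.0525
print(penalty)
```

Biases are not included, because only kernel_regularizer was set on the layers.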

With that in mind, you can match the validation loss of your example as follows:

from keras.layers import Dense, Input
from keras import regularizers
import keras.backend as K
from keras.losses import mean_squared_error
from keras.models import Model
from keras.callbacks import Callback
from keras.optimizers import RMSprop
import numpy as np


class ValidationCallback(Callback):
    def __init__(self, validation_x, validation_y, lambd):
        super(ValidationCallback, self).__init__()
        self.validation_x = validation_x
        self.validation_y = validation_y
        self.lambd = lambd

    def on_epoch_end(self, epoch, logs=None):
        validation_y_predicted = self.model.predict(self.validation_x)

        # Compute the regularization term for each layer
        weights = self.model.trainable_weights
        reg_term = 0
        for i, w in enumerate(weights):
            if i % 2 == 0:  # even indices are kernels; odd indices are biases, which are not regularized
                w_f = K.flatten(w)
                reg_term += self.lambd[i // 2] * K.sum(K.square(w_f))

        mse_loss = K.mean(mean_squared_error(self.validation_y, validation_y_predicted))
        loss = mse_loss + K.cast(reg_term, 'float64')

        print("My validation loss: %.4f" % K.eval(loss))


lambd = [0.01, 0.01]
input = Input(shape=(1024,))
hidden = Dense(1024, kernel_regularizer=regularizers.l2(lambd[0]))(input)
output = Dense(1024, kernel_regularizer=regularizers.l2(lambd[1]))(hidden)
model = Model(inputs=[input], outputs=output)
optimizer = RMSprop()
model.compile(loss='mse', optimizer=optimizer)

x_train = np.ones((2, 1024))
y_train = np.random.rand(2, 1024)
x_validation = x_train
y_validation = y_train

model.fit(x=x_train,
          y=y_train,
          callbacks=[ValidationCallback(x_validation, y_validation, lambd)],
          validation_data=(x_validation, y_validation))
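The callback's arithmetic can be double-checked without Keras by reproducing it in plain NumPy. The function name and the tiny data set below are mine, chosen only to make the computation concrete:

```python
import numpy as np

def l2_val_loss(y_true, y_pred, kernels, lambd):
    """MSE plus per-layer L2 penalties, mirroring what Keras adds to the loss.

    kernels: list of kernel matrices (biases are not regularized here)
    lambd:   per-layer L2 coefficients
    """
    mse = np.mean((y_true - y_pred) ** 2)
    penalty = sum(l * np.sum(W ** 2) for W, l in zip(kernels, lambd))
    return mse + penalty

# Tiny made-up check
y_true = np.zeros((2, 2))
y_pred = np.ones((2, 2))      # MSE = 1.0
kernels = [np.ones((2, 2))]   # sum of squares = 4.0
print(l2_val_loss(y_true, y_pred, kernels, [0.01]))  # 1.0 + 0.01 * 4.0 = 1.04
```

Note that this recomputes the penalty from the weights at the end of the epoch, which is also what the callback above does.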

Regarding "python - Manually computed validation loss differs from the reported val_loss when using regularization", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/53715409/
