
python - Batch size issue when using a custom loss function in Keras


I am making a slight modification to a standard neural network by defining a custom loss function. The custom loss function depends not only on y_true and y_pred, but also on the training data. I implemented it using the wrapper solution described here.

Specifically, I want to define a custom loss function that is the standard MSE plus the MSE between the input and the square of y_pred:

def custom_loss(x_true):
    def loss(y_true, y_pred):
        # standard MSE plus a term comparing the labels with the (closed-over) inputs
        return K.mean(K.square(y_pred - y_true) + K.square(y_true - x_true))
    return loss

Then I compile the model with

model_custom.compile(loss=custom_loss(x_true=training_data), optimizer='adam')

and fit it with

model_custom.fit(training_data, training_label, epochs=100, batch_size=training_data.shape[0])

All of the above works fine, because the batch size is in fact the total number of training samples.

But if I set a different batch_size (e.g. 10) when there are 1000 training samples, I get the error

Incompatible shapes: [1000] vs. [10].

It seems that Keras can automatically adjust the inputs of its own loss functions to the batch size, but cannot do so for a custom loss function.
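To see why the shapes clash, here is a rough NumPy analogy (an illustration only, not the actual Keras internals): inside the loss, Keras feeds per-batch slices of y_true and y_pred, while the closed-over x_true keeps the full size it had at compile time:

import numpy as np

y_true_batch = np.zeros((10, 1))    # what Keras feeds the loss for one batch
x_true_full  = np.zeros((1000, 1))  # the tensor baked into the loss at compile time

try:
    _ = y_true_batch - x_true_full  # the same broadcast the custom loss attempts
except ValueError as e:
    print(e)  # operands could not be broadcast together with shapes (10,1) (1000,1)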

Do you know how to solve this problem?

Thank you!

========================================================================

* Update: the batch size problem is solved, but another one has appeared

Thank you, Ori, for the suggestion of concatenating the input and output layers! It "works", in the sense that the code runs with any batch size. However, the result of training the new model seems to be wrong... Below is a simplified version of the code that demonstrates the problem:

import numpy as np
import scipy.io
import keras
from keras import backend as K
from keras.models import Model
from keras.layers import Input, Dense, Activation
from numpy.random import seed
from tensorflow import set_random_seed

def custom_loss(y_true, y_pred): # this is essentially the mean_square_error
    mse = K.mean( K.square( y_pred[:,2] - y_true ) )
    return mse

# set the seeds so that we get the same initialization across different trials
seed_numpy = 0
seed_tensorflow = 0

# generate data of x = [ y^3 y^2 ]
y = np.random.rand(5000+1000,1) * 2 # generate 5000 training and 1000 testing samples
x = np.concatenate( ( np.power(y, 3) , np.power(y, 2) ) , axis=1 )

training_data = x[0:5000:1,:]
training_label = y[0:5000:1]
testing_data = x[5000:6000:1,:]
testing_label = y[5000:6000:1]

# build the standard neural network with one hidden layer
seed(seed_numpy)
set_random_seed(seed_tensorflow)

input_standard = Input(shape=(2,)) # input
hidden_standard = Dense(10, activation='relu', input_shape=(2,))(input_standard) # hidden layer
output_standard = Dense(1, activation='linear')(hidden_standard) # output layer

model_standard = Model(inputs=[input_standard], outputs=[output_standard]) # build the model
model_standard.compile(loss='mean_squared_error', optimizer='adam') # compile the model
model_standard.fit(training_data, training_label, epochs=50, batch_size = 500) # train the model
testing_label_pred_standard = model_standard.predict(testing_data) # make prediction

# get the mean squared error
mse_standard = np.sum( np.power( testing_label_pred_standard - testing_label , 2 ) ) / 1000

# build the neural network with the custom loss
seed(seed_numpy)
set_random_seed(seed_tensorflow)

input_custom = Input(shape=(2,)) # input
hidden_custom = Dense(10, activation='relu', input_shape=(2,))(input_custom) # hidden layer
output_custom_temp = Dense(1, activation='linear')(hidden_custom) # output layer
output_custom = keras.layers.concatenate([input_custom, output_custom_temp])

model_custom = Model(inputs=[input_custom], outputs=[output_custom]) # build the model
model_custom.compile(loss = custom_loss, optimizer='adam') # compile the model
model_custom.fit(training_data, training_label, epochs=50, batch_size = 500) # train the model
testing_label_pred_custom = model_custom.predict(testing_data) # make prediction

# get the mean squared error
mse_custom = np.sum( np.power( testing_label_pred_custom[:,2:3:1] - testing_label , 2 ) ) / 1000

# compare the result
print( [ mse_standard , mse_custom ] )

Basically, I have a standard one-hidden-layer neural network, and a custom one-hidden-layer neural network whose output layer is concatenated with the input layer. For testing purposes, I did not use the concatenated input layer in the custom loss function, because I wanted to see whether the custom network could reproduce the standard one. Since the custom loss function is equivalent to the standard 'mean_squared_error' loss, both networks should produce the same training results (I also reset the random seeds to make sure they get the same initialization).

However, the training results are very different. It seems that the concatenation changes the training process? Any ideas?

Thanks again for your help!

Final update: Ori's approach of concatenating the input and output layers works, verified by using a generator. Thanks!!

Best Answer

The problem is that when compiling the model you set x_true to be a static tensor sized to all of the samples, whereas the inputs of a Keras loss function are y_true and y_pred, each of which has size [batch_size, :].

As I see it, there are two ways to solve this. The first is to create the batches with a generator, so that you control which indices are evaluated each time; in the loss function you can then slice the x_true tensor to match the samples being evaluated:

def custom_loss(x_true):
    def loss(y_true, y_pred):
        x_true_samples = relevant_samples(x_true)  # slice out this batch's samples
        return K.mean(K.square(y_pred - y_true) + K.square(y_true - x_true_samples))
    return loss
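The answer leaves relevant_samples abstract. One concrete way to realize the idea (a minimal sketch of my own, not the answer's code) is to have the generator pack each batch's inputs into the target array, so the loss can slice them back out without any shared state; note this relies on Keras accepting a target array wider than the model output, which the Keras versions of that era did:

import numpy as np
from keras import backend as K

def packing_generator(x, y, batch_size):
    # Hypothetical helper: yields (inputs, targets) where the targets carry
    # this batch's x alongside y, so the loss can recover the per-batch x_true.
    n = x.shape[0]
    start = 0
    while True:
        idx = np.arange(start, start + batch_size) % n
        yield x[idx], np.concatenate([y[idx], x[idx]], axis=1)
        start = (start + batch_size) % n

def packed_loss(y_true_packed, y_pred):
    y_true = y_true_packed[:, :1]  # first column: the real label
    x_true = y_true_packed[:, 1:]  # remaining columns: this batch's inputs
    return K.mean(K.square(y_pred - y_true) + K.square(y_true - x_true))

The batch and the loss then always agree on which samples are in play, because they travel together through the generator.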

That solution can get complicated, so what I would suggest is a simpler workaround: concatenate the input layer with the output layer, so that your new output is of the form original_output, input.

Now you can use the new, modified loss function:

def loss(y_true, y_pred):
    # output_shape is the width of the original (pre-concatenation) output
    return K.mean(K.square(y_pred[:, :output_shape] - y_true[:, :output_shape]) +
                  K.square(y_true[:, :output_shape] - y_pred[:, output_shape:]))

Now your new loss function will take both the input data and the prediction into account.

Edit:
Note that even though you set the seeds, your models are not trained identically: since you did not use a generator, you let Keras choose the batches, and for different models it may pick different samples.
Since your model does not converge, different samples can lead to different results.
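(As a side note, a lighter-weight way to make both runs see identical batches might be to pass shuffle=False to fit, which walks the training data in a fixed order; this is a suggestion of mine, not what the answer below uses.)

# Hypothetical alternative: fix the batch order instead of writing a generator.
model_standard.fit(training_data, training_label, epochs=50,
                   batch_size=500, shuffle=False)
model_custom.fit(training_data, training_label, epochs=50,
                 batch_size=500, shuffle=False)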

I added a generator to your code, to control which samples we pick for training; now you can see that both results are the same:

def custom_loss(y_true, y_pred): # this is essentially the mean_square_error
    mse = keras.losses.mean_squared_error(y_true, y_pred[:,2])
    return mse


def generator(x, y, batch_size):
    curIndex = 0
    batch_x = np.zeros((batch_size,2))
    batch_y = np.zeros((batch_size,1))
    while True:
        for i in range(batch_size):
            batch_x[i] = x[curIndex,:]
            batch_y[i] = y[curIndex,:]
            curIndex += 1            # advance through the data set
            if curIndex == 5000:     # wrap around after the 5000 training samples
                curIndex = 0
        yield batch_x, batch_y

# set the seeds so that we get the same initialization across different trials
seed_numpy = 0
seed_tensorflow = 0

# generate data of x = [ y^3 y^2 ]
y = np.random.rand(5000+1000,1) * 2 # generate 5000 training and 1000 testing samples
x = np.concatenate( ( np.power(y, 3) , np.power(y, 2) ) , axis=1 )

training_data = x[0:5000:1,:]
training_label = y[0:5000:1]
testing_data = x[5000:6000:1,:]
testing_label = y[5000:6000:1]

batch_size = 32



# build the standard neural network with one hidden layer
seed(seed_numpy)
set_random_seed(seed_tensorflow)

input_standard = Input(shape=(2,)) # input
hidden_standard = Dense(10, activation='relu', input_shape=(2,))(input_standard) # hidden layer
output_standard = Dense(1, activation='linear')(hidden_standard) # output layer

model_standard = Model(inputs=[input_standard], outputs=[output_standard]) # build the model
model_standard.compile(loss='mse', optimizer='adam') # compile the model
#model_standard.fit(training_data, training_label, epochs=50, batch_size = 10) # train the model
model_standard.fit_generator(generator(training_data,training_label,batch_size), steps_per_epoch= 32, epochs= 100)
testing_label_pred_standard = model_standard.predict(testing_data) # make prediction

# get the mean squared error
mse_standard = np.sum( np.power( testing_label_pred_standard - testing_label , 2 ) ) / 1000

# build the neural network with the custom loss
seed(seed_numpy)
set_random_seed(seed_tensorflow)


input_custom = Input(shape=(2,)) # input
hidden_custom = Dense(10, activation='relu', input_shape=(2,))(input_custom) # hidden layer
output_custom_temp = Dense(1, activation='linear')(hidden_custom) # output layer
output_custom = keras.layers.concatenate([input_custom, output_custom_temp])

model_custom = Model(inputs=input_custom, outputs=output_custom) # build the model
model_custom.compile(loss = custom_loss, optimizer='adam') # compile the model
#model_custom.fit(training_data, training_label, epochs=50, batch_size = 10) # train the model
model_custom.fit_generator(generator(training_data,training_label,batch_size), steps_per_epoch= 32, epochs= 100)
testing_label_pred_custom = model_custom.predict(testing_data)

# get the mean squared error
mse_custom = np.sum( np.power( testing_label_pred_custom[:,2:3:1] - testing_label , 2 ) ) / 1000

# compare the result
print( [ mse_standard , mse_custom ] )

Regarding python - Batch size issue when using a custom loss function in Keras, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/53235029/
