
python - Model breaks when I replace keras with tf.keras

Reposted. Author: 行者123. Updated: 2023-12-04 02:35:39

While trying to build a simple autoencoder with keras, I noticed something strange about the difference between keras and tf.keras.

tf.__version__

2.2.0

(x_train,_), (x_test,_) = tf.keras.datasets.mnist.load_data()

x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), 784))
x_test = x_test.reshape((len(x_test), 784)) # None, 784
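The preprocessing above can be checked on a small dummy array, with no MNIST download needed (the two-image batch here is an illustrative stand-in, not the real data):

```python
import numpy as np

# Simulate two 28x28 uint8 "images" in place of the MNIST arrays.
x = np.random.randint(0, 256, size=(2, 28, 28)).astype('float32') / 255.
x = x.reshape((len(x), 784))

print(x.shape)                            # (2, 784)
print(x.min() >= 0.0 and x.max() <= 1.0)  # True: pixels scaled into [0, 1]
```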

Original image:

plt.imshow(x_train[0].reshape(28, 28), cmap='gray')


import keras
# import tensorflow.keras as keras

my_autoencoder = keras.models.Sequential([
    keras.layers.Dense(64, input_shape=(784, ), activation='relu'),
    keras.layers.Dense(784, activation='sigmoid')
])
my_autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

my_autoencoder.fit(x_train, x_train, epochs=10, shuffle=True, validation_data=(x_test, x_test))

Training:

Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 7s 112us/step - loss: 0.2233 - val_loss: 0.1670
Epoch 2/10
60000/60000 [==============================] - 7s 111us/step - loss: 0.1498 - val_loss: 0.1337
Epoch 3/10
60000/60000 [==============================] - 7s 110us/step - loss: 0.1254 - val_loss: 0.1152
Epoch 4/10
60000/60000 [==============================] - 7s 110us/step - loss: 0.1103 - val_loss: 0.1032
Epoch 5/10
60000/60000 [==============================] - 7s 110us/step - loss: 0.1010 - val_loss: 0.0963
Epoch 6/10
60000/60000 [==============================] - 7s 109us/step - loss: 0.0954 - val_loss: 0.0919
Epoch 7/10
60000/60000 [==============================] - 7s 109us/step - loss: 0.0917 - val_loss: 0.0889
Epoch 8/10
60000/60000 [==============================] - 7s 110us/step - loss: 0.0890 - val_loss: 0.0866
Epoch 9/10
60000/60000 [==============================] - 7s 110us/step - loss: 0.0870 - val_loss: 0.0850
Epoch 10/10
60000/60000 [==============================] - 7s 109us/step - loss: 0.0853 - val_loss: 0.0835

Image decoded with keras:

temp = my_autoencoder.predict(x_train)

plt.imshow(temp[0].reshape(28, 28), cmap='gray')


So far everything works as expected, but something strange happens when I replace keras with tf.keras:

# import keras
import tensorflow.keras as keras
my_autoencoder = keras.models.Sequential([
    keras.layers.Dense(64, input_shape=(784, ), activation='relu'),
    keras.layers.Dense(784, activation='sigmoid')
])
my_autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

my_autoencoder.fit(x_train, x_train, epochs=10, shuffle=True, validation_data=(x_test, x_test))

Training:

Epoch 1/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.6952 - val_loss: 0.6940
Epoch 2/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.6929 - val_loss: 0.6918
Epoch 3/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.6907 - val_loss: 0.6896
Epoch 4/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.6885 - val_loss: 0.6873
Epoch 5/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.6862 - val_loss: 0.6848
Epoch 6/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.6835 - val_loss: 0.6818
Epoch 7/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.6802 - val_loss: 0.6782
Epoch 8/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.6763 - val_loss: 0.6737
Epoch 9/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.6714 - val_loss: 0.6682
Epoch 10/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.6652 - val_loss: 0.6612

Image decoded with tf.keras:

temp = my_autoencoder.predict(x_train)

plt.imshow(temp[0].reshape(28, 28), cmap='gray')

I can't find any mistake in my code. Does anyone know why this happens?

Best answer

The real culprit is the default learning rate used by keras.Adadelta versus tf.keras.Adadelta: 1.0 versus 1e-3, respectively; see below. The keras and tf.keras implementations do differ slightly, but the implementation difference alone cannot produce results as dramatically different as what you observed; only a different configuration, such as the learning rate, can.

You can confirm this by running print(model.optimizer.get_config()) in your original code.
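To see why that multiplier matters so much, here is a minimal numpy sketch of a single Adadelta update, following Zeiler's 2012 formulation with the extra learning-rate factor that both keras and tf.keras apply on top of the RMS-scaled step. The function and constants are illustrative, not the library's actual code:

```python
import numpy as np

def adadelta_step(grad, lr, rho=0.95, eps=1e-7):
    """One Adadelta update, starting from zero-initialized accumulators."""
    acc_grad = (1 - rho) * grad ** 2                       # running E[g^2]
    delta = np.sqrt(eps) / np.sqrt(acc_grad + eps) * grad  # RMS-scaled step
    return lr * delta                                      # lr multiplies the step

g = np.array([0.1])
step_keras = adadelta_step(g, lr=1.0)   # keras default: lr=1.0
step_tf    = adadelta_step(g, lr=1e-3)  # tf.keras default: learning_rate=1e-3

print(step_keras / step_tf)  # the tf.keras step is 1000x smaller
```

With every parameter update shrunk by three orders of magnitude, ten epochs in tf.keras barely move the weights, which is exactly the stalled training you observed.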

import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow.keras as keras

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()

x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), 784))
x_test = x_test.reshape((len(x_test), 784)) # None, 784

###############################################################################
model = keras.models.Sequential([
    keras.layers.Dense(64, input_shape=(784, ), activation='relu'),
    keras.layers.Dense(784, activation='sigmoid')
])
model.compile(optimizer=keras.optimizers.Adadelta(learning_rate=1),
              loss='binary_crossentropy')

model.fit(x_train, x_train, epochs=10, shuffle=True,
          validation_data=(x_test, x_test))

###############################################################################
temp = model.predict(x_train)
plt.imshow(temp[0].reshape(28, 28), cmap='gray')

Epoch 1/10
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2229 - val_loss: 0.1668
Epoch 2/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.1497 - val_loss: 0.1337
Epoch 3/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.1253 - val_loss: 0.1152
Epoch 4/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.1103 - val_loss: 0.1033
Epoch 5/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.1009 - val_loss: 0.0962
Epoch 6/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0952 - val_loss: 0.0916
Epoch 7/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0914 - val_loss: 0.0885
Epoch 8/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0886 - val_loss: 0.0862
Epoch 9/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0865 - val_loss: 0.0844
Epoch 10/10
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0849 - val_loss: 0.0830
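Incidentally, the stalled loss of about 0.69 in the broken run is itself a telltale sign: under binary cross-entropy, a model whose sigmoid outputs sit near 0.5 scores roughly ln 2 ≈ 0.693 regardless of the targets. A quick numpy check (illustrative, not tied to the model above):

```python
import numpy as np

y_true = np.random.rand(1000)       # arbitrary targets in [0, 1]
y_pred = np.full_like(y_true, 0.5)  # an "undecided" sigmoid output

bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(round(bce, 4))  # 0.6931, i.e. ln(2)
```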

Regarding "python - Model breaks when I replace keras with tf.keras", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/62043889/
