
tensorflow - Significantly higher test accuracy on mnist with keras than with tensorflow.keras


I am validating my TensorFlow (v2.2.0), CUDA (10.1) and cuDNN (libcudnn7-dev_7.6.5.32-1+cuda10.1_amd64.deb) installation with a basic example, and I am getting strange results...

When I run the following example in Keras, as shown at https://keras.io/examples/mnist_cnn/, I get ~99% validation accuracy. When I adjust the imports to run it through TensorFlow instead, I only get 86%.

I may be forgetting something.

Running with tensorflow:

from __future__ import print_function

import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras import backend as K

batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=tf.keras.losses.categorical_crossentropy,
              optimizer=tf.optimizers.Adadelta(),
              metrics=['accuracy'])

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

Sadly, I get the following output:
Epoch 2/12
469/469 [==============================] - 3s 6ms/step - loss: 2.2245 - accuracy: 0.2633 - val_loss: 2.1755 - val_accuracy: 0.4447
Epoch 3/12
469/469 [==============================] - 3s 7ms/step - loss: 2.1485 - accuracy: 0.3533 - val_loss: 2.0787 - val_accuracy: 0.5147
Epoch 4/12
469/469 [==============================] - 3s 6ms/step - loss: 2.0489 - accuracy: 0.4214 - val_loss: 1.9538 - val_accuracy: 0.6021
Epoch 5/12
469/469 [==============================] - 3s 6ms/step - loss: 1.9224 - accuracy: 0.4845 - val_loss: 1.7981 - val_accuracy: 0.6611
Epoch 6/12
469/469 [==============================] - 3s 6ms/step - loss: 1.7748 - accuracy: 0.5376 - val_loss: 1.6182 - val_accuracy: 0.7039
Epoch 7/12
469/469 [==============================] - 3s 6ms/step - loss: 1.6184 - accuracy: 0.5750 - val_loss: 1.4296 - val_accuracy: 0.7475
Epoch 8/12
469/469 [==============================] - 3s 7ms/step - loss: 1.4612 - accuracy: 0.6107 - val_loss: 1.2484 - val_accuracy: 0.7719
Epoch 9/12
469/469 [==============================] - 3s 6ms/step - loss: 1.3204 - accuracy: 0.6402 - val_loss: 1.0895 - val_accuracy: 0.7945
Epoch 10/12
469/469 [==============================] - 3s 6ms/step - loss: 1.2019 - accuracy: 0.6650 - val_loss: 0.9586 - val_accuracy: 0.8097
Epoch 11/12
469/469 [==============================] - 3s 7ms/step - loss: 1.1050 - accuracy: 0.6840 - val_loss: 0.8552 - val_accuracy: 0.8216
Epoch 12/12
469/469 [==============================] - 3s 7ms/step - loss: 1.0253 - accuracy: 0.7013 - val_loss: 0.7734 - val_accuracy: 0.8337
Test loss: 0.7734305262565613
Test accuracy: 0.8337000012397766

Nowhere near the 99.25% I get when importing Keras directly.
What am I missing?

Best Answer

Difference in optimizer defaults between keras and tensorflow.keras

The crux of the issue is the different default parameters of the Adadelta optimizer in Keras and TensorFlow; specifically, the different learning rates. We can see this with a simple check. Using the Keras version of the code, print(keras.optimizers.Adadelta().get_config()) outputs

{'learning_rate': 1.0, 'rho': 0.95, 'decay': 0.0, 'epsilon': 1e-07}

In the TensorFlow version, print(tf.optimizers.Adadelta().get_config()) gives us

{'name': 'Adadelta', 'learning_rate': 0.001, 'decay': 0.0, 'rho': 0.95, 'epsilon': 1e-07}

As we can see, the learning rates of the two Adadelta optimizers differ: Keras defaults to a learning rate of 1.0, while TensorFlow defaults to 0.001 (consistent with its other optimizers).
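
As a quick sanity check, both config dumps can be reproduced side by side. This is a minimal sketch, assuming the standalone keras package and TensorFlow 2.x are both installed in the same environment, as in the question:

# Compare the default Adadelta hyperparameters of standalone Keras and tf.keras.
import keras
import tensorflow as tf

print(keras.optimizers.Adadelta().get_config())
# {'learning_rate': 1.0, 'rho': 0.95, 'decay': 0.0, 'epsilon': 1e-07}
print(tf.optimizers.Adadelta().get_config())
# {'name': 'Adadelta', 'learning_rate': 0.001, 'decay': 0.0, 'rho': 0.95, 'epsilon': 1e-07}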

Effect of the higher learning rate

Because the Keras version of the Adadelta optimizer has the larger learning rate, it converges much faster and reaches high accuracy within 12 epochs, whereas the TensorFlow Adadelta optimizer needs longer training. If you increased the number of training epochs, the TensorFlow model would likely also reach 99% accuracy, as sketched below.
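
For completeness, here is an untested sketch of that alternative: keep the TensorFlow default learning rate of 0.001 and simply train for more epochs. The epoch count of 50 is an arbitrary illustration, not a tuned or verified value:

# Alternative (untested sketch): keep the default tf.optimizers.Adadelta()
# and compensate for its small learning rate by training longer.
model.compile(loss=tf.keras.losses.categorical_crossentropy,
              optimizer=tf.optimizers.Adadelta(),  # default learning_rate=0.001
              metrics=['accuracy'])
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=50,  # more than the original 12 epochs
          verbose=1,
          validation_data=(x_test, y_test))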

The fix

But rather than training for longer, we can simply make the TensorFlow model behave like the Keras model by initializing Adadelta with a learning rate of 1.0, i.e.
model.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer=tf.optimizers.Adadelta(learning_rate=1.0),  # Note the new learning rate
    metrics=['accuracy'])
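
After compiling, the optimizer's configuration can be inspected again to confirm that it now matches the standalone Keras defaults shown above (a quick check, not part of the original answer):

# Verify the patched optimizer: learning_rate should now read 1.0.
print(model.optimizer.get_config())
# Expected, based on the config dumps above:
# {'name': 'Adadelta', 'learning_rate': 1.0, 'decay': 0.0, 'rho': 0.95, 'epsilon': 1e-07}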

With this change, we get the following performance from TensorFlow:
Epoch 12/12
60000/60000 [==============================] - 102s 2ms/sample - loss: 0.0287 - accuracy: 0.9911 - val_loss: 0.0291 - val_accuracy: 0.9907
Test loss: 0.029134796149221757
Test accuracy: 0.9907

This is close to the desired 99.25% accuracy.

P.S. Incidentally, the differing default parameters between Keras and TensorFlow appear to be a known issue that was fixed and then reverted:
https://github.com/keras-team/keras/pull/12841 (software development is hard).

Regarding "tensorflow - Significantly higher test accuracy on mnist with keras than with tensorflow.keras", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/62033143/
