keras - Very good validation accuracy but poor predictions

I am building a Keras model to classify cats and dogs. I used transfer learning with bottleneck features and fine-tuning of a VGG model. I now get very good validation accuracy, around 97%, but when I run predictions I get very poor results in the classification report and confusion matrix. What could be the problem?

Here is the fine-tuning code and the results I get:

import numpy as np
from keras import applications, optimizers
from keras.models import Sequential, Model
from keras.layers import Flatten, Dense, Dropout
from keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics import classification_report, confusion_matrix

base_model = applications.VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
print('Model loaded.')

# build a classifier model to put on top of the convolutional model
top_model = Sequential()
top_model.add(Flatten(input_shape=base_model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(2, activation='sigmoid'))

# note that it is necessary to start with a fully-trained
# classifier, including the top classifier,
# in order to successfully do fine-tuning
top_model.load_weights(top_model_weights_path)

# add the classifier on top of the convolutional base
model = Model(inputs=base_model.input, outputs=top_model(base_model.output))

# set the first 15 layers (up to the last conv block)
# to non-trainable (weights will not be updated)
for layer in model.layers[:15]:
    layer.trainable = False

# compile the model with a SGD/momentum optimizer
# and a very slow learning rate
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])

# prepare data augmentation configuration
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')

model.summary()

# fine-tune the model
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size,
    verbose=2)

scores = model.evaluate_generator(generator=validation_generator,
                                  steps=nb_validation_samples // batch_size)
print("Accuracy = ", scores[1])

Y_pred = model.predict_generator(validation_generator, nb_validation_samples // batch_size)
y_pred = np.argmax(Y_pred, axis=1)

print('Confusion Matrix')
print(confusion_matrix(validation_generator.classes, y_pred))

print('Classification Report')
target_names = ['Cats', 'Dogs']
print(classification_report(validation_generator.classes, y_pred, target_names=target_names))

model.save("model_tuned.h5")

Accuracy = 0.974375

Confusion Matrix
[[186 214]
 [199 201]]

Classification Report
              precision    recall  f1-score   support

        Cats       0.48      0.47      0.47       400
        Dogs       0.48      0.50      0.49       400

   micro avg       0.48      0.48      0.48       800
   macro avg       0.48      0.48      0.48       800
weighted avg       0.48      0.48      0.48       800

Best answer

I think the problem is that you should add shuffle=False to your validation generator:

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=False)

The problem is that the default behavior is to shuffle the images, so the label order in validation_generator.classes does not match the order in which the generator yields batches to predict_generator.
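
For completeness, here is a minimal sketch of the corrected evaluation flow, reusing the variable names from the question (test_datagen, validation_data_dir, img_height, img_width, batch_size, model); the generator reset and the ceil-based step count are my own additions to make sure the predictions cover every sample in file order.

import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# rebuild the validation generator with shuffling disabled so that the
# prediction order matches validation_generator.classes
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=False)

validation_generator.reset()  # start from the first file
steps = int(np.ceil(validation_generator.samples / float(batch_size)))  # cover every sample

Y_pred = model.predict_generator(validation_generator, steps=steps)
y_pred = np.argmax(Y_pred, axis=1)

print(confusion_matrix(validation_generator.classes, y_pred))
print(classification_report(validation_generator.classes, y_pred,
                            target_names=['Cats', 'Dogs']))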

Regarding "keras - Very good validation accuracy but poor predictions", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/56815476/
