
machine-learning - Validation accuracy not improving while training ResNet50

Reposted. Author: 行者123. Updated: 2023-11-30 08:36:00

I am fine-tuning a ResNet50 model with data augmentation for face recognition. The training accuracy improves, but the validation accuracy does not improve at all from the very beginning, and I cannot figure out what is going wrong. Please have a look at my code.

I have tried changing the top layers I added, but it did not help.

import os
os.environ['KERAS_BACKEND'] = 'tensorflow'

from keras.applications import ResNet50
from keras.models import Sequential, Model
from keras.layers import Dense, Flatten, GlobalAveragePooling2D, Input, Dropout
from keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split

num_classes = 13

# ResNet50 convolutional base with locally stored ImageNet weights, no classification head
base = ResNet50(include_top=False,
                weights='resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5',
                input_tensor=Input(shape=(100, 100, 3)))

# New classification head on top of the base
x = base.output
#x = GlobalAveragePooling2D()(x)
x = Flatten()(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(13, activation='softmax')(x)

model = Model(inputs=base.input, outputs=predictions)

# Freeze the convolutional base so only the new head is trained
for layer in base.layers:
    layer.trainable = False

model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])

# Augmented generator for training, plain rescaling for validation
train_generator = ImageDataGenerator(featurewise_center=True,
                                     rotation_range=20,
                                     rescale=1./255,
                                     shear_range=0.2,
                                     zoom_range=0.2,
                                     width_shift_range=0.2,
                                     height_shift_range=0.2,
                                     horizontal_flip=True)

test_generator = ImageDataGenerator(rescale=1./255)

# image, label: the face images and one-hot labels, loaded earlier
x_train, x_test, y_train, y_test = train_test_split(image, label, test_size=0.2,
                                                    shuffle=True, random_state=0)

train_generator.fit(x_train)
test_generator.fit(x_test)

model.fit_generator(train_generator.flow(x_train, y_train, batch_size=32),
                    steps_per_epoch=10, epochs=50,
                    validation_data=test_generator.flow(x_test, y_test))

Output:

Epoch 19/50
10/10 [==============================] - 105s 10s/step - loss: 1.9387 - acc: 0.3803 - val_loss: 2.6820 - val_acc: 0.0709
Epoch 20/50
10/10 [==============================] - 107s 11s/step - loss: 2.0725 - acc: 0.3230 - val_loss: 2.6689 - val_acc: 0.0709
Epoch 21/50
10/10 [==============================] - 103s 10s/step - loss: 1.8884 - acc: 0.3375 - val_loss: 2.6677 - val_acc: 0.0709
Epoch 22/50
10/10 [==============================] - 95s 10s/step - loss: 1.8265 - acc: 0.4051 - val_loss: 2.6799 - val_acc: 0.0709
Epoch 23/50
10/10 [==============================] - 100s 10s/step - loss: 1.8346 - acc: 0.3812 - val_loss: 2.6929 - val_acc: 0.0709
Epoch 24/50
10/10 [==============================] - 102s 10s/step - loss: 1.9547 - acc: 0.3352 - val_loss: 2.6952 - val_acc: 0.0709
Epoch 25/50
10/10 [==============================] - 104s 10s/step - loss: 1.9472 - acc: 0.3281 - val_loss: 2.7168 - val_acc: 0.0709
Epoch 26/50
10/10 [==============================] - 103s 10s/step - loss: 1.8818 - acc: 0.4063 - val_loss: 2.7071 - val_acc: 0.0709
Epoch 27/50
10/10 [==============================] - 106s 11s/step - loss: 1.8053 - acc: 0.4000 - val_loss: 2.7059 - val_acc: 0.0709
Epoch 28/50
10/10 [==============================] - 104s 10s/step - loss: 1.9601 - acc: 0.3493 - val_loss: 2.7104 - val_acc: 0.0709

Best answer

This happened because I added the fully-connected layers directly on top of the base and started fine-tuning without training them first, as explained in the Keras blog: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html

in order to perform fine-tuning, all layers should start with properly trained weights: for instance you should not slap a randomly initialized fully-connected network on top of a pre-trained convolutional base. This is because the large gradient updates triggered by the randomly initialized weights would wreck the learned weights in the convolutional base. In our case this is why we first train the top-level classifier, and only then start fine-tuning convolutional weights alongside it.
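In concrete terms, the first stage from that blog post could look like the minimal sketch below. It reuses x_train, y_train, x_test and y_test from the code above; the filename top_model.h5, the optimizer and the epoch count are only illustrative assumptions, not the original code:

from keras.applications import ResNet50
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Input

# Frozen convolutional base, used here purely as a feature extractor
base = ResNet50(include_top=False,
                weights='resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5',
                input_tensor=Input(shape=(100, 100, 3)))

# Pre-compute "bottleneck" features once; x_train/x_test come from the code above
train_features = base.predict(x_train / 255.0, batch_size=32)
val_features = base.predict(x_test / 255.0, batch_size=32)

# Small classifier trained from scratch on the bottleneck features
top_model = Sequential()
top_model.add(Flatten(input_shape=base.output_shape[1:]))
top_model.add(Dense(1024, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(13, activation='softmax'))

top_model.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

top_model.fit(train_features, y_train,
              epochs=25, batch_size=32,
              validation_data=(val_features, y_test))

# Keep the now sensibly initialised head for the fine-tuning stage
top_model.save_weights('top_model.h5')   # assumed filename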

So the answer is to first train the top-level model on its own (as in the first sketch above), then create a new model that stacks this top model and its trained weights on top of the ResNet50 base model and its pre-trained weights, and finally fine-tune the combined model while keeping the base model (ResNet50) frozen up to its last layers. A sketch of that second stage follows.
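Again only a sketch under the same assumptions: the head trained above is loaded onto a fresh ResNet50 base, all but the last few base layers are frozen (the exact cut-off is a judgement call, the slice below is purely illustrative), and the combined model is fine-tuned with a small learning rate using the generators from the question:

from keras.applications import ResNet50
from keras.models import Model, Sequential
from keras.layers import Dense, Dropout, Flatten, Input
from keras.optimizers import SGD

# Fresh convolutional base with its pre-trained weights
base = ResNet50(include_top=False,
                weights='resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5',
                input_tensor=Input(shape=(100, 100, 3)))

# Re-create the head and load the weights trained in the first stage
top_model = Sequential()
top_model.add(Flatten(input_shape=base.output_shape[1:]))
top_model.add(Dense(1024, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(13, activation='softmax'))
top_model.load_weights('top_model.h5')   # saved in the first stage

# Stack the trained head on the convolutional base
model = Model(inputs=base.input, outputs=top_model(base.output))

# Freeze everything except the last few base layers (illustrative cut-off)
for layer in base.layers[:-10]:
    layer.trainable = False

# A very small learning rate keeps the gradient updates from wrecking the
# pre-trained convolutional weights
model.compile(optimizer=SGD(lr=1e-4, momentum=0.9),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit_generator(train_generator.flow(x_train, y_train, batch_size=32),
                    steps_per_epoch=len(x_train) // 32,
                    epochs=20,
                    validation_data=test_generator.flow(x_test, y_test))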

Regarding machine-learning - validation accuracy not improving while training ResNet50, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/53032377/
