
python - Layer error when loading weights from a pretrained model in Keras for fine-tuning


I am trying to follow the tutorial Fine-tuning the top layers of a pre-trained network.

For this, I want to use the pretrained keras-facenet model and add my own classifier on top. I am using vggface as the base model, and Facenet is based on VGGFace. This is the error I get when I run the code:

ValueError                                Traceback (most recent call last)
<ipython-input-9-261fed5d7ddc> in <module>()
     20 model.add(layers.Dense(12, activation='sigmoid'))
     21
---> 22 model.load_weights(top_model_weights_path)
     23
     24

/usr/local/lib/python3.6/dist-packages/keras/models.py in load_weights(self, filepath, by_name, skip_mismatch, reshape)
    766                 reshape=reshape)
    767         else:
--> 768             topology.load_weights_from_hdf5_group(f, layers, reshape=reshape)
    769
    770     def save_weights(self, filepath, overwrite=True):

/usr/local/lib/python3.6/dist-packages/keras/engine/topology.py in load_weights_from_hdf5_group(f, layers, reshape)
   3363                          'containing ' + str(len(layer_names)) +
   3364                          ' layers into a model with ' +
-> 3365                          str(len(filtered_layers)) + ' layers.')
   3366
   3367     # We batch weight value assignments in a single backend call

ValueError: You are trying to load a weight file containing 245 layers into a model with 2 layers.

Here is the code:

# imports assumed for this snippet
from keras import models, layers, optimizers
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator
from keras_vggface.vggface import VGGFace

# path to the model weights files.
weights_path = 'keras-facenet/weights/facenet_keras_weights.h5'
top_model_weights_path = 'keras-facenet/model/facenet_keras.h5'
# dimensions of our images.
img_width, img_height = 224, 224

train_data_dir = 'dataset_cfps/train'
validation_data_dir = 'dataset_cfps/validation'
nb_train_samples = 1774
nb_validation_samples = 313
epochs = 50
batch_size = 16

vggface = VGGFace(model='resnet50', include_top=False, input_shape=(img_width, img_height, 3))

# Create the model
model = models.Sequential()
model.add(layers.Flatten(input_shape=vggface.output_shape[1:]))
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(12, activation='sigmoid'))

model.load_weights(top_model_weights_path)

custom_vgg_model = Model(vggface.input, model(vggface.output))

for layer in custom_vgg_model.layers[:-3]:
    layer.trainable = False

custom_vgg_model.compile(loss='categorical_crossentropy',
                         optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
                         metrics=['accuracy'])

# prepare data augmentation configuration
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')

custom_vgg_model.summary()

# fine-tune the model
custom_vgg_model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size,
    verbose=2)

# Save the model
custom_vgg_model.save('facenet_latest_lr4.h5')

What could be causing this error? Is it the difference in the number of layers between the pretrained model and the classifier model?

Best Answer

I have run into this error before, though with a different dataset and architecture. The problem is that the topology of the model you built does not match the topology stored in the weight file. You can try the following code:

model.load_weights('filename.h5', by_name=True, skip_mismatch=True)

It will only load the layers whose names and weight shapes match entries in the weight file; the rest are skipped.

The relevant documentation is in topology.py.
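
For concreteness, here is a minimal sketch of how that call could slot into the script from the question, assuming the same base model, classifier layers, and weight-file path; the only change is the added by_name/skip_mismatch arguments:

from keras import models, layers
from keras_vggface.vggface import VGGFace

top_model_weights_path = 'keras-facenet/model/facenet_keras.h5'

# Base model and classifier rebuilt as in the question.
vggface = VGGFace(model='resnet50', include_top=False, input_shape=(224, 224, 3))

model = models.Sequential()
model.add(layers.Flatten(input_shape=vggface.output_shape[1:]))
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(12, activation='sigmoid'))

# by_name=True matches layers by their names; skip_mismatch=True skips every
# layer whose name or weight shape has no counterpart in the HDF5 file, so the
# "245 layers into a model with 2 layers" error is no longer raised.
model.load_weights(top_model_weights_path, by_name=True, skip_mismatch=True)

Note that the freshly added Dense/Dropout layers get default names (dense_1, dropout_1, ...) that most likely do not appear in the Facenet weight file, so the call will succeed but load nothing into them, which is exactly the behaviour described above: only matching layers are loaded.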

Regarding "python - Layer error when loading weights from a pretrained model in Keras for fine-tuning", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50806566/
