
python - Keras VGG16 fine-tuning


There is an example of VGG16 fine-tuning on the Keras blog, but I cannot reproduce it.

More precisely, here is the code used to initialize VGG16 without the top layers and to freeze all blocks except the topmost one:

from keras.models import Sequential
from keras.layers import InputLayer, Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.optimizers import SGD
from keras.preprocessing.image import ImageDataGenerator
from keras.utils.data_utils import get_file

WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'
weights_path = get_file('vgg16_weights.h5', WEIGHTS_PATH_NO_TOP)

model = Sequential()
model.add(InputLayer(input_shape=(150, 150, 3)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block5_maxpool'))

model.load_weights(weights_path)

for layer in model.layers:
    layer.trainable = False

for layer in model.layers[-4:]:
    layer.trainable = True
    print("Layer '%s' is trainable" % layer.name)

Next, create a top-level model with just a single hidden layer:

top_model = Sequential()
top_model.add(Flatten(input_shape=model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
top_model.load_weights('top_model.h5')

Note that it has previously been trained on bottleneck features, as described in the blog post. Next, attach this top model to the base model and compile:

model.add(top_model)
model.compile(loss='binary_crossentropy',
              optimizer=SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])
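As a quick sanity check (my own addition, not part of the original post), you can inspect the combined model right after this step; if top_model was attached correctly, the model should end with the appended Sequential and produce a two-dimensional output:

# Diagnostic sketch: verify the top model was actually appended and that the
# combined model outputs a (batch, 1) prediction for the binary classifier.
print(model.output_shape)      # expected: (None, 1)
print(model.layers[-1].name)   # expected: the name of the appended Sequential top model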

Finally, fit the model on the cats/dogs data:

batch_size = 16

train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

train_gen = train_datagen.flow_from_directory(
    TRAIN_DIR,
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary')

valid_gen = test_datagen.flow_from_directory(
    VALID_DIR,
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_gen,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=nb_epoch,
    validation_data=valid_gen,
    validation_steps=nb_valid_samples // batch_size)
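For completeness, the snippets above rely on a few constants that the question does not define; a minimal set of placeholder definitions (the values here are my own assumptions, adjust them to your dataset) could be:

# Placeholder configuration (not from the original post); adjust to your data.
TRAIN_DIR = 'data/train'          # one sub-directory per class (cats/, dogs/)
VALID_DIR = 'data/validation'
nb_train_samples = 2000
nb_valid_samples = 800
nb_epoch = 50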

But here is the error I get when trying to fit:

ValueError: Error when checking model target: expected block5_maxpool to have 4 dimensions, but got array with shape (16, 1)

So it seems there is something wrong with the last pooling layer in the base model. Or perhaps I have done something wrong when trying to connect the base model to the top model.

Has anyone run into a similar problem? Or is there perhaps a better way to build this kind of "cascaded" model? I am using keras==2.0.0 with the Theano backend.

Note: I was using the examples from the gist and the applications.VGG16 utility, but I had issues trying to concatenate the models, since I am not very familiar with the Keras functional API. So the solution I provide here is the most "successful" one, i.e. it only fails at the fitting stage.


Update #1

OK, here is a short explanation of what I am trying to do. First, I generate bottleneck features from VGG16 as follows:

def save_bottleneck_features():
    datagen = ImageDataGenerator(rescale=1./255)
    model = applications.VGG16(include_top=False, weights='imagenet')

    generator = datagen.flow_from_directory(
        TRAIN_DIR,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    print("Predicting train samples..")
    # Note: in Keras 2, the second argument of predict_generator is the number
    # of batches (steps), not the number of samples.
    bottleneck_features_train = model.predict_generator(generator, nb_train_samples)
    # Open in binary mode so np.save also works on Python 3.
    np.save(open('bottleneck_features_train.npy', 'wb'), bottleneck_features_train)

    generator = datagen.flow_from_directory(
        VALID_DIR,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    print("Predicting valid samples..")
    bottleneck_features_valid = model.predict_generator(generator, nb_valid_samples)
    np.save(open('bottleneck_features_valid.npy', 'wb'), bottleneck_features_valid)

Then I create a top model and train it on these features as follows:

def train_top_model():
    train_data = np.load(open('bottleneck_features_train.npy', 'rb'))
    # Integer division keeps the label counts as ints.
    train_labels = np.array([0] * (nb_train_samples // 2) +
                            [1] * (nb_train_samples // 2))
    valid_data = np.load(open('bottleneck_features_valid.npy', 'rb'))
    valid_labels = np.array([0] * (nb_valid_samples // 2) +
                            [1] * (nb_valid_samples // 2))

    model = Sequential()
    model.add(Flatten(input_shape=train_data.shape[1:]))
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(train_data, train_labels,
              nb_epoch=nb_epoch,
              batch_size=batch_size,
              validation_data=(valid_data, valid_labels),
              verbose=1)
    model.save_weights('top_model.h5')

So basically there are two trained models: base_model with ImageNet weights and top_model with weights derived from the bottleneck features. I am wondering how to connect them. Is it possible at all, or am I doing something wrong? Because, as far as I can see, the response from @thomas-pinetz assumes that the top model is not trained separately but attached to the base model right away. I'm not sure I'm being clear, so here is a quote from the blog:

In order to perform fine-tuning, all layers should start with properly trained weights: for instance you should not slap a randomly initialized fully-connected network on top of a pre-trained convolutional base. This is because the large gradient updates triggered by the randomly initialized weights would wreck the learned weights in the convolutional base. In our case this is why we first train the top-level classifier, and only then start fine-tuning convolutional weights alongside it.

Best Answer

I think the weights of the VGG net you describe do not fit your model, and that is where the error comes from. In any case, there is a much better way to do this, using the network itself as described in ( https://keras.io/applications/#vgg16 ).

You can simply use:

base_model = keras.applications.vgg16.VGG16(include_top=False, weights='imagenet', input_tensor=None, input_shape=None)

to instantiate a pre-trained VGG net. Then you can freeze the layers and use the Model class to instantiate your own model, like this:

x = base_model.output
x = Flatten()(x)
x = Dense(your_classes, activation='softmax')(x) #minor edit
new_model = Model(input=base_model.input, output=x)
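For the freezing step mentioned above, a minimal sketch (my own addition, not part of the original answer) could look like this:

# Freeze the pre-trained convolutional base so only the new classifier head is updated.
for layer in base_model.layers:
    layer.trainable = False
# Remember to compile (or re-compile) the model after changing trainable flags,
# otherwise the change has no effect on training.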

To combine the bottom and the top network you can use the following code snippet. It uses the Input layer ( https://keras.io/getting-started/functional-api-guide/ ), load_model ( https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model ) and the Keras functional API:

final_input = Input(shape=(3, 224, 224))
base_model = vgg...
top_model = load_model(weights_file)

x = base_model(final_input)
result = top_model(x)
final_model = Model(input=final_input, output=result)
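As a rough sketch of how this snippet might be adapted to the question's setup (150x150 RGB input, a binary cats/dogs classifier), with the caveat that load_model only works if the top model was saved with model.save(); since the question saved only the weights via save_weights('top_model.h5'), the top model architecture is rebuilt here and loaded with load_weights. All other names and values are taken from the question or are placeholders:

from keras.applications.vgg16 import VGG16
from keras.layers import Input, Flatten, Dense, Dropout
from keras.models import Model, Sequential
from keras.optimizers import SGD

# Pre-trained convolutional base; input size taken from the question.
base_model = VGG16(include_top=False, weights='imagenet', input_shape=(150, 150, 3))

# Freeze everything except the last convolutional block before fine-tuning.
for layer in base_model.layers[:-4]:
    layer.trainable = False

# Rebuild the top model exactly as it was trained on the bottleneck features,
# then load the weights saved earlier with save_weights('top_model.h5').
top_model = Sequential()
top_model.add(Flatten(input_shape=base_model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
top_model.load_weights('top_model.h5')

# Chain the base and the top model through the functional API.
final_input = Input(shape=(150, 150, 3))
x = base_model(final_input)
result = top_model(x)
final_model = Model(input=final_input, output=result)

final_model.compile(loss='binary_crossentropy',
                    optimizer=SGD(lr=1e-4, momentum=0.9),
                    metrics=['accuracy'])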

On the topic of python - Keras VGG16 fine-tuning, the original question can be found on Stack Overflow: https://stackoverflow.com/questions/43386463/
