
python - Keras Unet + VGG16 predictions are all the same


I am training a U-Net in Keras with VGG16 as the pretrained (encoder) part. The model trains well and is learning - I see gradual improvement on the validation set.

However, when I call predict on an image, I get back a matrix in which every value is identical.

The model looks like this:

import numpy as np
from keras import backend as K
from keras.models import Model
from keras.layers import (Layer, Input, Conv2D, Conv2DTranspose,
                          concatenate, BatchNormalization, Dropout)
from keras.applications.vgg16 import VGG16


class Gray2VGGInput(Layer):
    """Custom conversion layer: replicate grayscale to 3 channels and subtract the ImageNet mean."""
    def build(self, input_shape):
        self.image_mean = K.variable(value=np.array([103.939, 116.779, 123.68]).reshape([1, 1, 1, 3]).astype('float32'),
                                     dtype='float32',
                                     name='imageNet_mean')
        self.built = True

    def call(self, x):
        rgb_x = K.concatenate([x, x, x], axis=-1)
        norm_x = rgb_x - self.image_mean
        return norm_x

    def compute_output_shape(self, input_shape):
        return input_shape[:3] + (3,)


def UNET1_VGG16(img_rows=864, img_cols=1232):
    '''
    U-Net with pretrained encoder layers from VGG16.
    '''
    def upsampleLayer(in_layer, concat_layer, input_size):
        '''
        Upsampling (= decoder) building block.

        Parameters
        ----------
        in_layer: input layer
        concat_layer: encoder layer to concatenate with (skip connection)
        input_size: number of filters for the convolutions
        '''
        upsample = Conv2DTranspose(input_size, (2, 2), strides=(2, 2), padding='same')(in_layer)
        upsample = concatenate([upsample, concat_layer])
        conv = Conv2D(input_size, (1, 1), activation='relu', kernel_initializer='he_normal', padding='same')(upsample)
        conv = BatchNormalization()(conv)
        conv = Dropout(0.2)(conv)
        conv = Conv2D(input_size, (1, 1), activation='relu', kernel_initializer='he_normal', padding='same')(conv)
        conv = BatchNormalization()(conv)
        return conv

    #--------
    #INPUT
    #--------
    #batch, height, width, channels
    inputs_1 = Input((img_rows, img_cols, 1))

    #------------------------
    #INPUT CONVERTER & VGG16
    #------------------------
    inputs_3 = Gray2VGGInput(name='gray_to_rgb')(inputs_1)  #shape=(img_rows, img_cols, 3)
    base_VGG16 = VGG16(include_top=False, weights='imagenet', input_tensor=inputs_3)

    #----------------------------
    #ENCODER (skip connections)
    #----------------------------
    c1 = base_VGG16.get_layer("block1_conv2").output  #(None, 864, 1232, 64)
    c2 = base_VGG16.get_layer("block2_conv2").output  #(None, 432, 616, 128)
    c3 = base_VGG16.get_layer("block3_conv2").output  #(None, 216, 308, 256)
    c4 = base_VGG16.get_layer("block4_conv2").output  #(None, 108, 154, 512)

    #-----------
    #BOTTLENECK
    #-----------
    c5 = base_VGG16.get_layer("block5_conv2").output  #(None, 54, 77, 512)

    #--------
    #DECODER
    #--------
    c6 = upsampleLayer(in_layer=c5, concat_layer=c4, input_size=512)
    c7 = upsampleLayer(in_layer=c6, concat_layer=c3, input_size=256)
    c8 = upsampleLayer(in_layer=c7, concat_layer=c2, input_size=128)
    c9 = upsampleLayer(in_layer=c8, concat_layer=c1, input_size=64)

    #-------------
    #DENSE OUTPUT
    #-------------
    outputs = Conv2D(1, (1, 1), activation='sigmoid')(c9)

    model = Model(inputs=inputs_1, outputs=outputs)

    #Freeze the first 16 layers (input, gray_to_rgb and VGG16 blocks 1-4)
    for layer in model.layers[:16]:
        layer.trainable = False

    print(model.summary())

    model.compile(optimizer='adam',
                  loss=fr.diceCoefLoss,   #fr: my module with the Dice loss/metric (not shown)
                  metrics=[fr.diceCoef])

    return model
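As a quick diagnostic (not part of the original code), this small sketch prints which layers the model.layers[:16] loop actually leaves frozen; the indices and names correspond to whatever print(model.summary()) shows above.

#List frozen vs. trainable layers of the model built above
model = UNET1_VGG16()
for i, layer in enumerate(model.layers):
    print(i, layer.name, 'trainable =', layer.trainable)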

Then I load the model and call predict:

model = un.UNET1_VGG16()

pth_to_model = PTH_OUTPUT + 'weights__L_01.h5'
model.load_weights(pth_to_model)

preds = model.predict(X_image_test, verbose=1)

However, the result looks like this:

[[0.4567569 0.4567569 0.4567569 ... 0.4567569 0.4567569 0.4567569]
[0.4567569 0.4567569 0.4567569 ... 0.4567569 0.4567569 0.4567569]
[0.4567569 0.4567569 0.4567569 ... 0.4567569 0.4567569 0.4567569]
...
[0.4567569 0.4567569 0.4567569 ... 0.4567569 0.4567569 0.4567569]
[0.4567569 0.4567569 0.4567569 ... 0.4567569 0.4567569 0.4567569]
[0.4567569 0.4567569 0.4567569 ... 0.4567569 0.4567569 0.4567569]]

I use the same procedure with other models that do not include VGG16 and everything works fine. So I assume something related to VGG16 is wrong - perhaps the input layer, where I convert the grayscale input into a "fake" RGB image?
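One way to rule the conversion layer out (a minimal sketch, assuming the Gray2VGGInput class defined above) is to wrap it alone in a tiny model and check that a constant grayscale batch comes out as the replicated image minus the ImageNet means:

import numpy as np
from keras.layers import Input
from keras.models import Model

gray_in = Input((8, 8, 1))                        #small dummy grayscale input
rgb_out = Gray2VGGInput(name='gray_to_rgb')(gray_in)
probe = Model(gray_in, rgb_out)

x = np.full((1, 8, 8, 1), 128.0, dtype='float32')
y = probe.predict(x)
print(y[0, 0, 0])                                 #expected: 128 - [103.939, 116.779, 123.68]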

Best answer

The problem lies in the frozen VGG layers. If your dataset is very different from ImageNet, you should perhaps train the whole model end-to-end. Also, apparently, frozen BatchNormalization layers can behave strangely. For reference, see this discussion.
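A minimal sketch of that suggestion, assuming the UNET1_VGG16(), fr (Dice loss) and pth_to_model definitions from the question: unfreeze every layer and recompile before continuing training.

model = UNET1_VGG16()
model.load_weights(pth_to_model)      #continue from the saved weights

for layer in model.layers:
    layer.trainable = True            #no frozen VGG16 / BatchNormalization layers

model.compile(optimizer='adam',       #recompile so the trainable change takes effect
              loss=fr.diceCoefLoss,
              metrics=[fr.diceCoef])

In Keras, changes to the trainable flag only take effect after the model is compiled again, so the recompile step is required.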

Regarding "python - Keras Unet + VGG16 predictions are all the same", the original question is on Stack Overflow: https://stackoverflow.com/questions/56101737/
