tensorflow - Heatmap for a custom model with transfer learning


While trying to get a Grad-CAM heatmap for my custom model, I ran into a problem. I am fine-tuning an image classification model based on ResNet50. My model is defined as follows:

IMG_SHAPE = (img_height,img_width) + (3,)

base_model = tf.keras.applications.ResNet50(input_shape=IMG_SHAPE, include_top=False, weights='imagenet')

and,

preprocess_input = tf.keras.applications.resnet50.preprocess_input

and finally,

input_layer = tf.keras.Input(shape=(img_height, img_width, 3),name="input_layer")
x = preprocess_input(input_layer)
x = base_model(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D(name="global_average_layer")(x)
x = tf.keras.layers.Dropout(0.2,name="dropout_layer")(x)
x = tf.keras.layers.Dense(4,name="training_layer")(x)
outputs = tf.keras.layers.Dense(4,name="prediction_layer")(x)
model = tf.keras.Model(input_layer, outputs)

Now I am following the tutorial at https://keras.io/examples/vision/grad_cam/ to generate the heatmap. The tutorial suggests using model.summary() to find the last convolutional layer and the classifier layers, but I am not sure how to do that for my model. If I run model.summary(), I get:

__________________________________________________________________________________________________
Layer (type)                    Output Shape              Param #     Connected to
==================================================================================================
input_layer (InputLayer)        [(None, 224, 224, 3)]     0
__________________________________________________________________________________________________
tf.__operators__.getitem_11     (None, 224, 224, 3)       0
__________________________________________________________________________________________________
tf.nn.bias_add_11 (TFOpLambd    [(None, 224, 224, 3)]     0
__________________________________________________________________________________________________
resnet50 (Functional)           (None, 7, 7, 2048)         23587712
__________________________________________________________________________________________________
global_average (GlobalAverag    (None, 2048)               0
__________________________________________________________________________________________________
dropout_layer (Dropout)         (None, 2048)               0
__________________________________________________________________________________________________
hidden_layer (Dense)            (None, 4)                  8196
__________________________________________________________________________________________________
predict_layer (Dense)           (None, 4)                  20
==================================================================================================

However, if I run base_model.summary(), I get:

Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_29 (InputLayer)           [(None, 224, 224, 3) 0
__________________________________________________________________________________________________
conv1_pad (ZeroPadding2D)       (None, 230, 230, 3)  0           input_29[0][0]
__________________________________________________________________________________________________
conv1_conv (Conv2D)             (None, 112, 112, 64) 9472        conv1_pad[0][0]
__________________________________________________________________________________________________
conv1_bn (BatchNormalization)   (None, 112, 112, 64) 256         conv1_conv[0][0]
__________________________________________________________________________________________________
...                             ...                  ...         ...
__________________________________________________________________________________________________
conv5_block3_3_bn (BatchNormali (None, 7, 7, 2048)   8192        conv5_block3_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_add (Add)          (None, 7, 7, 2048)   0           conv5_block2_out[0][0]
                                                                 conv5_block3_3_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_out (Activation)   (None, 7, 7, 2048)   0           conv5_block3_add[0][0]
==================================================================================================

If I follow the tutorial and use "resnet50" as the last convolutional layer, I get the following error:

Graph disconnected: cannot obtain value for tensor KerasTensor(type_spec=TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='input_29'), name='input_29', description="created by layer 'input_29'") at layer "conv1_pad". The following previous layers were accessed without issue: []

But if I use "conv5_block3_out" instead, the program cannot find that layer on the model. How can I access the layers that seem to be hidden inside the resnet50 layer?
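
As a side note, the layers nested inside the resnet50 sub-model can be listed by calling summary() on that layer directly, or by iterating over its layers; a minimal sketch, assuming the model defined above:

# Inspect the layers hidden inside the nested "resnet50" sub-model.
resnet = model.get_layer('resnet50')

# Full layer-by-layer summary of the sub-model
resnet.summary()

# Or just print the layer names and output shapes,
# e.g. to locate the last convolutional layer
for layer in resnet.layers:
    print(layer.name, layer.output_shape)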

Best Answer

I managed to find a solution to this problem. When defining make_gradcam_heatmap, I added this line:

input_layer = model.get_layer('resnet50').get_layer('input_1').input

and changed the next line to:

last_conv_layer = model.get_layer(last_conv_layer_name).get_layer("conv5_block3_out")
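
Building on those two lines, here is a minimal sketch of how the whole Grad-CAM computation can be wired up for this nested model, adapted from the Keras tutorial. The head layer names ("global_average_layer", "dropout_layer", "training_layer", "prediction_layer") are assumptions taken from the model definition in the question and should be adjusted to whatever model.summary() actually reports; re-applying preprocess_input by hand is also an assumption, since it belongs to the outer model and is skipped when the resnet50 sub-model is called directly:

import tensorflow as tf

def make_gradcam_heatmap(img_array, model, last_conv_layer_name="conv5_block3_out"):
    # The conv layers live inside the nested "resnet50" sub-model.
    resnet = model.get_layer('resnet50')

    # Maps the ResNet input (equivalent to the 'input_1'/'input_29' line above)
    # to the activations of the last convolutional block.
    grad_model = tf.keras.Model(resnet.inputs,
                                resnet.get_layer(last_conv_layer_name).output)

    # Classifier head layers of the outer model, re-applied on the conv features.
    head_layers = ["global_average_layer", "dropout_layer",
                   "training_layer", "prediction_layer"]

    with tf.GradientTape() as tape:
        # preprocess_input is part of the outer model, so apply it here as well.
        x = tf.keras.applications.resnet50.preprocess_input(img_array)
        conv_output = grad_model(x)
        tape.watch(conv_output)
        preds = conv_output
        for name in head_layers:
            preds = model.get_layer(name)(preds, training=False)
        top_class = tf.argmax(preds[0])
        top_class_score = preds[:, top_class]

    # Gradient of the top class score w.r.t. the conv feature map,
    # averaged over the spatial dimensions to get per-channel weights.
    grads = tape.gradient(top_class_score, conv_output)
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))

    # Weight the feature-map channels by the pooled gradients and normalise.
    heatmap = conv_output[0] @ pooled_grads[..., tf.newaxis]
    heatmap = tf.squeeze(heatmap)
    heatmap = tf.maximum(heatmap, 0) / (tf.math.reduce_max(heatmap) + 1e-8)
    return heatmap.numpy()

The returned 7x7 heatmap can then be resized and overlaid on the input image exactly as in the tutorial.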

Regarding tensorflow - heatmap for a custom model with transfer learning, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/66571767/
