
machine-learning - Rewriting a Sequential model using the functional API

Repost · Author: 行者123 · Updated: 2023-11-30 09:49:25

I am trying to rewrite the Sequential model of a Network In Network CNN using the functional API. I am using it with the CIFAR-10 dataset. The Sequential model trains without problems, but the functional API model gets stuck. I have probably missed something when rewriting the model.

Here is a reproducible example:

Dependencies:

from keras.models import Model, Input, Sequential
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, Dropout, Activation
from keras.utils import to_categorical
from keras.losses import categorical_crossentropy
from keras.optimizers import Adam
from keras.datasets import cifar10

Load the dataset:

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train / 255.
x_test = x_test / 255.
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
input_shape = x_train[0,:,:,:].shape
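The preprocessing above can be checked without downloading CIFAR-10. Below is a minimal sketch using NumPy with synthetic data standing in for the real images (the shapes `(32, 32, 3)` and 10 classes match the CIFAR-10 loader); `np.eye` indexing plays the role of `to_categorical`:

```python
import numpy as np

# Synthetic stand-in for CIFAR-10: 8 RGB images, integer labels in [0, 9].
x = np.random.randint(0, 256, size=(8, 32, 32, 3)).astype("float32")
y = np.array([[3], [1], [4], [1], [5], [9], [2], [6]])

x = x / 255.0                                  # scale pixels to [0, 1]
num_classes = 10
y_onehot = np.eye(num_classes)[y.reshape(-1)]  # NumPy equivalent of to_categorical

input_shape = x[0].shape                       # per-sample shape fed to the Input layer
```

With real CIFAR-10 data, `input_shape` is `(32, 32, 3)` and the one-hot targets have shape `(n_samples, 10)`.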

Here is the working Sequential model:

model = Sequential()

# mlpconv block 1
model.add(Conv2D(32, (5, 5), activation='relu', padding='valid', input_shape=input_shape))
model.add(Conv2D(32, (1, 1), activation='relu'))
model.add(Conv2D(32, (1, 1), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))

# mlpconv block 2
model.add(Conv2D(64, (3, 3), activation='relu', padding='valid'))
model.add(Conv2D(64, (1, 1), activation='relu'))
model.add(Conv2D(64, (1, 1), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))

# mlpconv block 3
model.add(Conv2D(128, (3, 3), activation='relu', padding='valid'))
model.add(Conv2D(32, (1, 1), activation='relu'))
model.add(Conv2D(10, (1, 1), activation='relu'))
model.add(GlobalAveragePooling2D())

model.add(Activation('softmax'))
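The spatial dimensions through the `'valid'` convolutions and 2×2 poolings can be traced with a small helper (a sketch using the standard valid-padding output-size formula; the 1×1 convolutions leave the spatial size unchanged):

```python
def conv_valid(n, k):
    # output size of a 'valid' convolution with kernel size k, stride 1
    return n - k + 1

def pool(n, k=2):
    # output size of a non-overlapping k x k max pool
    return n // k

n = 32                      # CIFAR-10 image height/width
n = pool(conv_valid(n, 5))  # block 1: 32 -> 28 -> 14
n = pool(conv_valid(n, 3))  # block 2: 14 -> 12 -> 6
n = conv_valid(n, 3)        # block 3: 6 -> 4
```

So `GlobalAveragePooling2D` averages a 4×4 map per channel, turning the 10-channel output of the last convolution into the 10 class scores that feed the softmax.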

Compile and train:

model.compile(loss=categorical_crossentropy, optimizer=Adam(), metrics=['acc'])

_ = model.fit(x=x_train, y=y_train, batch_size=32,
              epochs=200, verbose=1, validation_split=0.2)

Within three epochs, the model reaches close to 50% validation accuracy.

Here is the same model rewritten using the functional API:

model_input = Input(shape=input_shape)

# mlpconv block 1
x = Conv2D(32, (5, 5), activation='relu', padding='valid')(model_input)
x = Conv2D(32, (1, 1), activation='relu')(x)
x = Conv2D(32, (1, 1), activation='relu')(x)
x = MaxPooling2D((2, 2))(x)
x = Dropout(0.5)(x)

# mlpconv block 2
x = Conv2D(64, (3, 3), activation='relu', padding='valid')(x)
x = Conv2D(64, (1, 1), activation='relu')(x)
x = Conv2D(64, (1, 1), activation='relu')(x)
x = MaxPooling2D((2, 2))(x)
x = Dropout(0.5)(x)

# mlpconv block 3
x = Conv2D(128, (3, 3), activation='relu', padding='valid')(x)
x = Conv2D(32, (1, 1), activation='relu')(x)
x = Conv2D(10, (1, 1), activation='relu')(x)

x = GlobalAveragePooling2D()(x)
x = Activation(activation='softmax')(x)

model = Model(model_input, x, name='nin_cnn')

This model is then compiled with the same parameters as the Sequential model. When training, the training accuracy stays stuck at 0.10, meaning the model is not improving and is effectively picking one of the 10 classes at random.

What did I miss when rewriting the model? When calling model.summary(), the models look identical apart from the explicit Input layer in the functional API model.

Best Answer

Removing the activation from the final conv layer fixes the problem:

x = Conv2D(10, (1, 1))(x)

Still not sure why the Sequential model works fine with an activation on that layer.
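One observable effect of the extra relu can be shown numerically (a sketch of the mechanism, not a full explanation of the training difference): relu clamps negative logits to zero before the softmax, so no class can ever be assigned a lower score than a zero logit, which compresses the probabilities the softmax can produce.

```python
import math

def softmax(v):
    # standard softmax over a list of logits
    e = [math.exp(x) for x in v]
    s = sum(e)
    return [x / s for x in e]

logits = [2.0, -3.0, 0.5]                       # hypothetical pre-activation scores
relu_logits = [max(0.0, x) for x in logits]     # what the extra relu would emit

p_plain = softmax(logits)       # the -3.0 class gets a very small probability
p_relu = softmax(relu_logits)   # the clamped class gets a noticeably larger one
```

The `logits` values here are made up for illustration; the point is only that relu-then-softmax cannot push any class probability below that of a zero logit.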

Regarding "machine-learning - Rewriting a Sequential model using the functional API", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47735201/
