
python - Keras, get the output of a layer at every epoch

Reprinted. Author: 行者123. Updated: 2023-11-28 17:03:11

What have I done?

I implemented a Keras model as follows:

from sklearn.model_selection import train_test_split
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Note: random_state expects an int; np.random.seed(7) returns None
train_X, test_X, train_Y, test_Y = train_test_split(X, Y, test_size=0.2, random_state=7, shuffle=True)

# Add a timestep dimension for the LSTM: (samples, 1, features)
train_X = np.reshape(train_X, (train_X.shape[0], 1, train_X.shape[1]))
test_X = np.reshape(test_X, (test_X.shape[0], 1, test_X.shape[1]))

model = Sequential()
model.add(LSTM(100, return_sequences=False, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(train_Y.shape[1], activation='softmax'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])

model.fit(train_X, train_Y, validation_split=.20,
          epochs=1000, batch_size=50)

What do I want?

I want to feed the output of the second-to-last layer (the LSTM) into a support vector machine (SVM) at every epoch (i.e., all 1000 of them), so that the SVM is trained as well.

But I don't know how to do this.

Any ideas?

Update:

I use ModelCheckpoint as follows:

from keras.callbacks import ModelCheckpoint

model = Sequential()
model.add(LSTM(100, return_sequences=False, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(train_Y.shape[1], activation='softmax'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])

# checkpoint: save whenever validation accuracy improves
filepath = "weights-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]

model.fit(train_X, train_Y, validation_split=.20,
          epochs=1000, batch_size=50, callbacks=callbacks_list, verbose=0)

Output:

Epoch 00991: val_acc did not improve
Epoch 00992: val_acc improved from 0.93465 to 0.93900, saving model to weights-992-0.94.hdf5
Epoch 00993: val_acc did not improve
Epoch 00994: val_acc did not improve
Epoch 00995: val_acc did not improve
Epoch 00996: val_acc did not improve
Epoch 00997: val_acc did not improve
Epoch 00998: val_acc improved from 0.93900 to 0.94543, saving model to weights-998-0.94.hdf5
Epoch 00999: val_acc did not improve

Question:

As @IonicSolutions suggests, how do I load all these models to get the output of the LSTM layer at each epoch?

Best Answer

What works best in your case depends on how exactly you set up and train your SVM, but there are at least two options using callbacks:

You can use the ModelCheckpoint callback to save a copy of the model you are training at every epoch, and then load all of these models to obtain the output of the LSTM layer.
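The first option might be sketched as follows. The glob pattern and the helper name `lstm_outputs_per_checkpoint` are illustrative assumptions, and the second-to-last layer of each saved model is assumed to be the LSTM:

```python
# Sketch of option 1: load every saved checkpoint and extract the
# output of the second-to-last (LSTM) layer for a given input array.
import glob

from keras.models import Model, load_model


def lstm_outputs_per_checkpoint(pattern, X):
    """Return {checkpoint path: LSTM features for X} for all matching files."""
    features = {}
    for path in sorted(glob.glob(pattern)):
        model = load_model(path)
        # layers[-2] is the layer before the softmax Dense, i.e. the LSTM
        extractor = Model(inputs=model.input, outputs=model.layers[-2].output)
        features[path] = extractor.predict(X, verbose=0)
    return features
```

Calling it with a pattern such as `"weights-*.hdf5"` yields one feature matrix per checkpoint, which you can then feed to your SVM training.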

You can also create your own callback by implementing the Callback base class. The callback has access to the model, and you can use on_epoch_end to extract the LSTM output at the end of every epoch.
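The second option might look like the sketch below; the class name `FeatureLogger` and its constructor arguments are assumptions, not from the original answer:

```python
# Sketch of option 2: a custom callback that records the LSTM output on
# fixed data at the end of each epoch.
from keras.callbacks import Callback


class FeatureLogger(Callback):
    def __init__(self, access_model, X):
        super().__init__()
        self.access_model = access_model  # model whose output is the LSTM layer
        self.X = X
        self.features = []  # one feature array per epoch

    def on_epoch_end(self, epoch, logs=None):
        # Extract the current LSTM representation; an SVM could be fit
        # on self.features[-1] right here instead of storing it.
        self.features.append(self.access_model.predict(self.X, verbose=0))
```

This avoids writing checkpoints to disk entirely, at the cost of one extra forward pass per epoch.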

Edit: To conveniently access the second-to-last layer, you can do the following:

from keras.models import Model
from keras.layers import Input, LSTM, Dense

# Create the model with the functional API
inp = Input((train_X.shape[1], train_X.shape[2],))
lstm = LSTM(100, return_sequences=False)(inp)
dense = Dense(train_Y.shape[1], activation='softmax')(lstm)

# Create the full model
model = Model(inputs=inp, outputs=dense)

# Create the model for access to the LSTM layer
access = Model(inputs=inp, outputs=lstm)

You can then pass `access` to your callback when instantiating it. The key point here is that `model` and `access` share the same LSTM layer, whose weights change as `model` is trained.
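Putting it together, a minimal end-to-end sketch (with tiny synthetic data; the shapes, epoch count, and SVC kernel are assumptions) trains the shared-layer model and then fits a scikit-learn SVM on the LSTM features:

```python
# End-to-end sketch with tiny synthetic data: 20 samples, 1 timestep,
# 8 features, 3 classes; the SVC kernel choice is illustrative.
import numpy as np
from keras.layers import Input, LSTM, Dense
from keras.models import Model
from sklearn.svm import SVC

train_X = np.random.rand(20, 1, 8).astype('float32')
labels = np.arange(20) % 3              # integer class labels 0..2
train_Y = np.eye(3)[labels]             # one-hot targets for the network

inp = Input((1, 8))
lstm = LSTM(100, return_sequences=False)(inp)
dense = Dense(3, activation='softmax')(lstm)
model = Model(inputs=inp, outputs=dense)     # trained with backprop
access = Model(inputs=inp, outputs=lstm)     # shares the LSTM weights

model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(train_X, train_Y, epochs=2, batch_size=10, verbose=0)

features = access.predict(train_X, verbose=0)   # shape: (20, 100)
svm = SVC(kernel='rbf').fit(features, labels)   # SVM on LSTM features
```

Because `access` shares the LSTM with `model`, calling `access.predict` after (or during) training always reflects the current weights.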

Regarding "python - Keras, get the output of a layer at every epoch", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/52961581/
