
python - Concatenating an LSTM with a CNN of different tensor dimensions in Keras


These are the two neural networks I am trying to merge with a concatenate operation. The network is supposed to classify IMDB movie reviews as 1 (good movie) or 0 (bad movie).

def cnn_lstm_merged():
    embedding_vecor_length = 32
    cnn_model = Sequential()
    cnn_model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
    cnn_model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
    cnn_model.add(MaxPooling1D(pool_size=2))
    cnn_model.add(Flatten())

    lstm_model = Sequential()
    lstm_model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
    lstm_model.add(LSTM(64, activation = 'relu'))
    lstm_model.add(Flatten())

    merge = concatenate([lstm_model, cnn_model])
    hidden = (Dense(1, activation = 'sigmoid'))(merge)
    #print(model.summary())
    output = hidden.fit(X_train, y_train, epochs=3, batch_size=64)
    return output

But when I run the code I get the following error:

  File "/home/pythonist/Desktop/EnsemblingLSTM_CONV/train.py", line 59, in cnn_lstm_merged
lstm_model.add(Flatten())
File "/home/pythonist/deeplearningenv/lib/python3.6/site-packages/keras/engine/sequential.py", line 185, in add
output_tensor = layer(self.outputs[0])
File "/home/pythonist/deeplearningenv/lib/python3.6/site-packages/keras/engine/base_layer.py", line 414, in __call__
self.assert_input_compatibility(inputs)
File "/home/pythonist/deeplearningenv/lib/python3.6/site-packages/keras/engine/base_layer.py", line 327, in assert_input_compatibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer flatten_2: expected min_ndim=3, found ndim=2
[Finished in 4.8s with exit code 1]

How do I merge these two layers? Thank you.

Best Answer

There is no need for a Flatten layer after the LSTM, because by default an LSTM returns only its last state rather than the full sequence, so its output already has shape (BS, n_output). A Flatten layer, on the other hand, expects an input of shape (BS, a, b), which it reshapes to (BS, a*b).

So either remove the Flatten layer and work with the last state only, or add return_sequences=True to the LSTM. That makes the LSTM return all outputs instead of just the last one, i.e. a tensor of shape (BS, T, n_out).
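To illustrate the two options, here is a minimal sketch. It assumes the same top_words, embedding_vecor_length and max_review_length as in the question; the layer size of 64 is simply the one used above.

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Flatten

lstm_model = Sequential()
lstm_model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))

# Option 1 (default): return only the last state.
# Output shape: (BS, 64) -- already 2D, so no Flatten is needed.
lstm_model.add(LSTM(64, activation='relu'))

# Option 2: return the whole sequence, then flatten it.
# Output shape: (BS, max_review_length, 64) -> (BS, max_review_length * 64)
# lstm_model.add(LSTM(64, activation='relu', return_sequences=True))
# lstm_model.add(Flatten())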

Edit: Also, the way you create the final model is wrong. Please see this example; for your case it should look something like this:

merge = Concatenate([lstm_model, cnn_model])
hidden = Dense(1, activation = 'sigmoid')
conc_model = Sequential()
conc_model.add(merge)
conc_model.add(hidden)
conc_model.compile(...)

output = conc_model.fit(X_train, y_train, epochs=3, batch_size=64)

All in all, you would probably be better off using the Functional API.
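For reference, here is a minimal Functional API sketch of the same two-branch architecture. This is not the original answer's code; it assumes top_words, max_review_length, X_train and y_train are defined as in the question, and the layer sizes are the ones used above.

from keras.models import Model
from keras.layers import Input, Embedding, Conv1D, MaxPooling1D, Flatten, LSTM, Dense, concatenate

embedding_vecor_length = 32

inp = Input(shape=(max_review_length,))

# CNN branch
cnn = Embedding(top_words, embedding_vecor_length, input_length=max_review_length)(inp)
cnn = Conv1D(filters=32, kernel_size=3, padding='same', activation='relu')(cnn)
cnn = MaxPooling1D(pool_size=2)(cnn)
cnn = Flatten()(cnn)

# LSTM branch: last state only, so no Flatten is needed here
lstm = Embedding(top_words, embedding_vecor_length, input_length=max_review_length)(inp)
lstm = LSTM(64, activation='relu')(lstm)

# Merge the two branches and classify
merged = concatenate([cnn, lstm])
out = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=inp, outputs=out)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=3, batch_size=64)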

Edit 2: Here is the final code:

cnn_model = Sequential()
cnn_model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
cnn_model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
cnn_model.add(MaxPooling1D(pool_size=2))
cnn_model.add(Flatten())

lstm_model = Sequential()
lstm_model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
lstm_model.add(LSTM(64, activation = 'relu', return_sequences=True))
lstm_model.add(Flatten())

# instead of the last two lines you can also use
# lstm_model.add(LSTM(64, activation = 'relu'))
# then you do not have to use the Flatten layer. depends on your actual needs

merge = Concatenate([lstm_model, cnn_model])
hidden = Dense(1, activation = 'sigmoid')
conc_model = Sequential()
conc_model.add(merge)
conc_model.add(hidden)

Regarding python - Concatenating an LSTM with a CNN of different tensor dimensions in Keras, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52155642/
