
tensorflow - Error when adding an embedding layer to an LSTM autoencoder


I have a seq2seq model that works fine. I want to add an embedding layer to this network, but I ran into an error.

This is my architecture using pretrained word embeddings, and it works fine (in fact the code is almost the same as the code available here, except that I want to include a trainable embedding layer in the model instead of using pretrained embedding vectors):

LATENT_SIZE = 20

inputs = Input(shape=(SEQUENCE_LEN, EMBED_SIZE), name="input")

encoded = Bidirectional(LSTM(LATENT_SIZE), merge_mode="sum", name="encoder_lstm")(inputs)
encoded = Lambda(rev_ent)(encoded)
decoded = RepeatVector(SEQUENCE_LEN, name="repeater")(encoded)
decoded = Bidirectional(LSTM(EMBED_SIZE, return_sequences=True), merge_mode="sum", name="decoder_lstm")(decoded)
autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="sgd", loss='mse')
autoencoder.summary()
NUM_EPOCHS = 1

num_train_steps = len(Xtrain) // BATCH_SIZE
num_test_steps = len(Xtest) // BATCH_SIZE

checkpoint = ModelCheckpoint(filepath=os.path.join('Data/', "simple_ae_to_compare"), save_best_only=True)
history = autoencoder.fit_generator(train_gen, steps_per_epoch=num_train_steps, epochs=NUM_EPOCHS, validation_data=test_gen, validation_steps=num_test_steps, callbacks=[checkpoint])

This is the summary:

Layer (type)                 Output Shape              Param #
=================================================================
input (InputLayer)           (None, 45, 50)            0
_________________________________________________________________
encoder_lstm (Bidirectional) (None, 20)                11360
_________________________________________________________________
lambda_1 (Lambda)            (512, 20)                 0
_________________________________________________________________
repeater (RepeatVector)      (512, 45, 20)             0
_________________________________________________________________
decoder_lstm (Bidirectional) (512, 45, 50)             28400

When I change the code to add the embedding layer, like this:

inputs = Input(shape=(SEQUENCE_LEN,), name="input")

embedding = Embedding(output_dim=EMBED_SIZE, input_dim=VOCAB_SIZE, input_length=SEQUENCE_LEN, trainable=True)(inputs)
encoded = Bidirectional(LSTM(LATENT_SIZE), merge_mode="sum", name="encoder_lstm")(embedding)

I get this error:

expected decoder_lstm to have 3 dimensions, but got array with shape (512, 45)

So my question is: what is wrong with my model?

Update

So, this error is raised during the training phase. I also checked the dimensions of the data being fed to the model: it is (61598, 45), which obviously has no feature dimension, i.e. no EMBED_SIZE here.

But why is this error raised in the decoder part? In the encoder part I have included the embedding layer, so that part is perfectly fine. The problem is that when the data reaches the decoder part there is no embedding layer there, and so it cannot be reshaped correctly into three dimensions.

Now the question is: why does this not happen in similar code? This is my view, please correct me if I'm wrong. Seq2seq code is usually written for translation or summarization, and in that code the decoder part also has its own input (in the translation case the decoder receives input in the other language, so an embedding in the decoder part makes sense). Here, however, I have no separate decoder input, which is why I don't need a separate embedding in the decoder part. Still, I don't know how to fix the problem, I only know why it happens :| (see the shape sketch below).
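To make the mismatch concrete, here is a small illustrative sketch (not from the original post; the values 512, 45 and 50 are taken from the model summary above). It only compares what the generator yields once the inputs are plain word ids with the shape that decoder_lstm produces, which is also the shape the MSE target must have:

import numpy as np

BATCH_SIZE, SEQUENCE_LEN, EMBED_SIZE = 512, 45, 50

# What the generator yields when the inputs are plain word ids:
Xbatch = np.zeros((BATCH_SIZE, SEQUENCE_LEN), dtype='int32')   # shape (512, 45)

# What decoder_lstm produces, and therefore the shape the MSE target must have:
target_shape = (BATCH_SIZE, SEQUENCE_LEN, EMBED_SIZE)          # (512, 45, 50)

# Passing Xbatch itself as the target triggers:
# "expected decoder_lstm to have 3 dimensions, but got array with shape (512, 45)"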

Update 2

This is the data I feed into the model:

sent_wids = np.zeros((len(parsed_sentences), SEQUENCE_LEN), 'int32')
sample_seq_weights = np.zeros((len(parsed_sentences), SEQUENCE_LEN), 'float')
for index_sentence in range(len(parsed_sentences)):
    temp_sentence = parsed_sentences[index_sentence]
    temp_words = nltk.word_tokenize(temp_sentence)
    for index_word in range(SEQUENCE_LEN):
        if index_word < sent_lens[index_sentence]:
            sent_wids[index_sentence, index_word] = lookup_word2id(temp_words[index_word])
        else:
            sent_wids[index_sentence, index_word] = lookup_word2id('PAD')

def sentence_generator(X, embeddings, batch_size, sample_weights):
    while True:
        # loop once per epoch
        num_recs = X.shape[0]
        indices = np.random.permutation(np.arange(num_recs))
        # print(embeddings.shape)
        num_batches = num_recs // batch_size
        for bid in range(num_batches):
            sids = indices[bid * batch_size : (bid + 1) * batch_size]
            temp_sents = X[sids, :]
            Xbatch = embeddings[temp_sents]
            weights = sample_weights[sids, :]
            yield Xbatch, Xbatch

LATENT_SIZE = 60

train_size = 0.95
split_index = int(math.ceil(len(sent_wids)*train_size))
Xtrain = sent_wids[0:split_index, :]
Xtest = sent_wids[split_index:, :]
train_w = sample_seq_weights[0: split_index, :]
test_w = sample_seq_weights[split_index:, :]
train_gen = sentence_generator(Xtrain, embeddings, BATCH_SIZE,train_w)
test_gen = sentence_generator(Xtest, embeddings , BATCH_SIZE,test_w)

parsed_sentences is 61598 sentences that have already been padded.

Also, this is the function I use in the model's Lambda layer; I'm adding it here just in case it makes any difference:

def rev_entropy(x):
    def row_entropy(row):
        _, _, count = tf.unique_with_counts(row)
        count = tf.cast(count, tf.float32)
        prob = count / tf.reduce_sum(count)
        prob = tf.cast(prob, tf.float32)
        rev = -tf.reduce_sum(prob * tf.log(prob))
        return rev

    nw = tf.reduce_sum(x, axis=1)
    rev = tf.map_fn(row_entropy, x)
    rev = tf.where(tf.is_nan(rev), tf.zeros_like(rev), rev)
    rev = tf.cast(rev, tf.float32)
    max_entropy = tf.log(tf.clip_by_value(nw, 2, LATENT_SIZE))
    concentration = (max_entropy / (1 + rev))
    new_x = x * (tf.reshape(concentration, [BATCH_SIZE, 1]))
    return new_x

Any help is appreciated :)

Best Answer

I tried the following example on Google Colab (TensorFlow version 1.13.1):

from tensorflow.python import keras
import numpy as np

SEQUENCE_LEN = 45
LATENT_SIZE = 20
EMBED_SIZE = 50
VOCAB_SIZE = 100

inputs = keras.layers.Input(shape=(SEQUENCE_LEN,), name="input")

embedding = keras.layers.Embedding(output_dim=EMBED_SIZE, input_dim=VOCAB_SIZE, input_length=SEQUENCE_LEN, trainable=True)(inputs)

encoded = keras.layers.Bidirectional(keras.layers.LSTM(LATENT_SIZE), merge_mode="sum", name="encoder_lstm")(embedding)
decoded = keras.layers.RepeatVector(SEQUENCE_LEN, name="repeater")(encoded)
decoded = keras.layers.Bidirectional(keras.layers.LSTM(EMBED_SIZE, return_sequences=True), merge_mode="sum", name="decoder_lstm")(decoded)
autoencoder = keras.models.Model(inputs, decoded)
autoencoder.compile(optimizer="sgd", loss='mse')
autoencoder.summary()

and then trained the model with some random data:


NUM_EPOCHS = 1
x = np.random.randint(0, 90, size=(10, 45))
y = np.random.normal(size=(10, 45, 50))
history = autoencoder.fit(x, y, epochs=NUM_EPOCHS)

This works fine. I suspect the problem is in the way you feed the labels/outputs for the MSE calculation.

Update

Context

In the original problem you are trying to reconstruct word embeddings with a seq2seq model, where the embeddings are fixed and pre-trained. If instead you want a trainable embedding layer as part of the model, it becomes very hard to model this problem: you no longer have fixed targets (the targets change on every iteration of the optimization, because the embedding layer itself is changing), and that makes the optimization very unstable.
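As an illustration of this moving-target point (a hypothetical sketch, not code from the question; glove_embeddings, embedding_layer and word_ids are placeholder names), compare a target built from fixed pre-trained vectors with one derived from a trainable Embedding layer:

# Fixed target: pre-trained GloVe vectors never change during training
fixed_target = glove_embeddings[word_ids]              # constant across epochs

# Moving target: with a trainable Embedding layer, the "equivalent" target
# would have to be rebuilt from the layer's current weights, which are
# updated after every batch, so the reconstruction objective chases itself
current_weights = embedding_layer.get_weights()[0]     # changes after each update
moving_target = current_weights[word_ids]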

Fixing the code

If you do the following, you should be able to get the code working. Here embeddings is the numpy.ndarray of pre-trained GloVe vectors.

def sentence_generator(X, embeddings, batch_size):
    while True:
        # loop once per epoch
        num_recs = X.shape[0]
        embed_size = embeddings.shape[1]
        indices = np.random.permutation(np.arange(num_recs))
        # print(embeddings.shape)
        num_batches = num_recs // batch_size
        for bid in range(num_batches):
            sids = indices[bid * batch_size : (bid + 1) * batch_size]
            # Xbatch is a [batch_size, seq_length] array
            Xbatch = X[sids, :]

            # Creating the Y targets
            Xembed = embeddings[Xbatch.reshape(-1), :]
            # Ybatch will be a [batch_size, seq_length, embed_size] array
            Ybatch = Xembed.reshape(batch_size, -1, embed_size)
            yield Xbatch, Ybatch
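As a usage sketch (reusing the variable names from the question; the BATCH_SIZE value is an assumption, not given in the answer), the fixed generator can be plugged back into the original training loop like this:

BATCH_SIZE = 64  # assumed value

train_gen = sentence_generator(Xtrain, embeddings, BATCH_SIZE)
test_gen = sentence_generator(Xtest, embeddings, BATCH_SIZE)

num_train_steps = len(Xtrain) // BATCH_SIZE
num_test_steps = len(Xtest) // BATCH_SIZE

history = autoencoder.fit_generator(
    train_gen,
    steps_per_epoch=num_train_steps,
    epochs=NUM_EPOCHS,
    validation_data=test_gen,
    validation_steps=num_test_steps,
)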

This question about adding an embedding layer to an LSTM autoencoder in TensorFlow is based on a similar question found on Stack Overflow: https://stackoverflow.com/questions/56433993/
