
python - How to use the output of an attention wrapper applied to an LSTM as the input of a TimeDistributed layer in Keras?


I have been trying to apply an attention wrapper to the output of the LSTM model shown in the machinelearningmastery tutorial:

from numpy import array
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import TimeDistributed
from keras.layers import LSTM
# prepare sequence
length = 5
seq = array([i/float(length) for i in range(length)])
X = seq.reshape(1, length, 1)
y = seq.reshape(1, length, 1)
# define LSTM configuration
n_neurons = length
n_batch = 1
n_epoch = 1000
# create LSTM
model = Sequential()
model.add(LSTM(n_neurons, input_shape=(length, 1), return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mean_squared_error', optimizer='adam')
print(model.summary())
# train LSTM
model.fit(X, y, epochs=n_epoch, batch_size=n_batch, verbose=2)
# evaluate
result = model.predict(X, batch_size=n_batch, verbose=0)
for value in result[0,:,0]:
    print('%.1f' % value)

The output of the LSTM with return_sequences=True, of shape (samples, steps, features), is consumed by the attention wrapper, which outputs shape (samples, features). Here is my modified code:

model = Sequential()
model.add(LSTM(n_neurons, input_shape=(length, 1), return_sequences=True))
model.add(Attention())
model.add(TimeDistributed(Dense(1)))

I have been using the attention wrapper described here:

from keras import backend as K
from keras import initializers, regularizers, constraints
from keras.engine.topology import Layer


def dot_product(x, kernel):
    if K.backend() == 'tensorflow':
        # todo: check that this is correct
        return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
    else:
        return K.dot(x, kernel)


class Attention(Layer):
    def __init__(self,
                 W_regularizer=None, b_regularizer=None,
                 W_constraint=None, b_constraint=None,
                 bias=True, **kwargs):

        self.supports_masking = True
        self.init = initializers.get('glorot_uniform')

        self.W_regularizer = regularizers.get(W_regularizer)
        self.b_regularizer = regularizers.get(b_regularizer)

        self.W_constraint = constraints.get(W_constraint)
        self.b_constraint = constraints.get(b_constraint)

        self.bias = bias
        super(Attention, self).__init__(**kwargs)

    def build(self, input_shape):
        assert len(input_shape) == 3

        self.W = self.add_weight((input_shape[-1],),
                                 initializer=self.init,
                                 name='{}_W'.format(self.name),
                                 regularizer=self.W_regularizer,
                                 constraint=self.W_constraint)
        if self.bias:
            self.b = self.add_weight((input_shape[1],),
                                     initializer='zero',
                                     name='{}_b'.format(self.name),
                                     regularizer=self.b_regularizer,
                                     constraint=self.b_constraint)
        else:
            self.b = None

        self.built = True

    def compute_mask(self, input, input_mask=None):
        # do not pass the mask to the next layers
        return None

    def call(self, x, mask=None):
        # attention scores: one scalar per timestep
        eij = dot_product(x, self.W)

        if self.bias:
            eij += self.b

        eij = K.tanh(eij)

        a = K.exp(eij)

        # apply mask after the exp. will be re-normalized next
        if mask is not None:
            # Cast the mask to floatX to avoid float64 upcasting in theano
            a *= K.cast(mask, K.floatx())

        # in some cases, especially in the early stages of training, the sum may be almost zero
        # and this results in NaN's. A workaround is to add a very small positive number ε to the sum.
        # a /= K.cast(K.sum(a, axis=1, keepdims=True), K.floatx())
        a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())

        # weighted sum over the time axis: (samples, steps, features) -> (samples, features)
        a = K.expand_dims(a)
        weighted_input = x * a
        return K.sum(weighted_input, axis=1)

    def get_output_shape_for(self, input_shape):
        return input_shape[0], input_shape[-1]

However, the error I get is:

ValueError: Input 0 is incompatible with layer time_distributed_1: expected ndim=3, found ndim=2

Is there a way to reshape the output of the attention wrapper so that a TimeDistributed layer can be used?

Best Answer

Use K.reshape or K.expand_dims to adjust the shape of the tensor returned by call(). The TimeDistributed layer expects three dimensions, while K.sum returns two. You could try K.expand_dims(a, axis=2) or K.reshape(a, shape=(-1, 5, 1)), assuming a = K.sum(weighted_input, axis=1).
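
For concreteness, here is a minimal sketch of that change, assuming the Attention class posted in the question; summed is just a local name for the K.sum result the answer calls a, and only call() and get_output_shape_for are modified (in Keras 2 the latter method is named compute_output_shape):

# inside class Attention(Layer) defined above -- only these two methods change

    def call(self, x, mask=None):
        eij = dot_product(x, self.W)
        if self.bias:
            eij += self.b
        eij = K.tanh(eij)
        a = K.exp(eij)
        if mask is not None:
            a *= K.cast(mask, K.floatx())
        a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
        a = K.expand_dims(a)
        weighted_input = x * a
        summed = K.sum(weighted_input, axis=1)      # (samples, features): ndim=2
        # give TimeDistributed the third axis it expects:
        return K.expand_dims(summed, axis=2)        # (samples, features, 1): ndim=3
        # or, for this length-5 example: K.reshape(summed, (-1, 5, 1))

    def get_output_shape_for(self, input_shape):
        return input_shape[0], input_shape[-1], 1

With this change model.add(TimeDistributed(Dense(1))) accepts the attention output, although the "time" axis TimeDistributed now iterates over is the feature axis of the attention summary rather than the original sequence steps.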

Regarding "python - How to use the output of an attention wrapper applied to an LSTM as the input of a TimeDistributed layer in Keras?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47608273/
