
python - How to concatenate different tensor shapes in keras


I am trying to implement an attention mechanism in Keras. My context_vector has shape=(?, 1024),
and my decoder embedding has shape=(?, 38, 1024).
Both context_vector and decoder_embedding are tensors — how can I concatenate them?

def B_Attention_layer(state_h, state_c, encoder_outputs):

    d0 = tf.keras.layers.Dense(1024, name='dense_layer_1')
    d1 = tf.keras.layers.Dense(1024, name='dense_layer_2')
    d2 = tf.keras.layers.Dense(1024, name='dense_layer_3')
    # below are the hidden states of the LSTM
    # my encoder output shape is shape=(?, 38, 1024)
    # each hidden state shape is: state_c shape=(?, 1024), state_h shape=(?, 1024)
    hidden_with_time_axis_1 = tf.keras.backend.expand_dims(state_h, 1)
    hidden_with_time_axis_2 = tf.keras.backend.expand_dims(state_c, 1)
    score = d0(tf.keras.activations.tanh(encoder_outputs) + d1(hidden_with_time_axis_1) + d2(hidden_with_time_axis_2))
    attention_weights = tf.keras.activations.softmax(score, axis=1)
    context_vector = attention_weights * encoder_outputs
    context_vector = tf.keras.backend.sum(context_vector, axis=1)
    input_to_decoder = tf.keras.layers.Concatenate(axis=-1)([context_vector, decoder_embedding])

    return input_to_decoder, attention_weights

When I try this, I get a concatenation error like the one below:

ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 1, 1024), (None, 38, 1024)]

Best Answer

In your case the concatenation axis is 1, because you have shapes (None, 1, 1024) and (None, 38, 1024).

Example:

I1 = tf.keras.Input(shape=(1, 1024))
# <tf.Tensor 'input_6:0' shape=(?, 1, 1024) dtype=float32>
I2 = tf.keras.Input(shape=(38, 1024))
# <tf.Tensor 'input_7:0' shape=(?, 38, 1024) dtype=float32>

concated = tf.keras.layers.Concatenate(axis=1)([I1,I2])
# output: <tf.Tensor 'concatenate_4/concat:0' shape=(?, 39, 1024) dtype=float32>

Also, since the concatenation axis is 1, the error message says that the remaining dimensions must match.
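
For completeness, here is a minimal sketch (not part of the accepted answer) of how the same idea could be applied to the tensors from the question: give the (?, 1024) context vector a time axis and then concatenate with the (?, 38, 1024) decoder embedding along axis=1. The variable names mirror the question, and the fixed sizes 38 and 1024 are illustrative placeholders taken from the shapes it reports.

import tensorflow as tf

# Placeholder inputs with the shapes described in the question
context_vector = tf.keras.Input(shape=(1024,))        # (?, 1024)
decoder_embedding = tf.keras.Input(shape=(38, 1024))  # (?, 38, 1024)

# Give the context vector a time axis so both tensors are rank 3: (?, 1, 1024)
context_with_time_axis = tf.keras.layers.Reshape((1, 1024))(context_vector)

# Concatenate along the time axis (axis=1); all other dimensions match: (?, 39, 1024)
input_to_decoder = tf.keras.layers.Concatenate(axis=1)(
    [context_with_time_axis, decoder_embedding]
)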

Regarding python - How to concatenate different tensor shapes in keras, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59528527/
