I have built an encoder-decoder Seq2Seq model and I want to add an attention layer to it. I tried adding an attention layer through this, but it didn't help.
Here is my initial code without attention:
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model

# Encoder
encoder_inputs = Input(shape=(None,))
enc_emb = Embedding(num_encoder_tokens, latent_dim, mask_zero=True)(encoder_inputs)
encoder_lstm = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder_lstm(enc_emb)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
dec_emb_layer = Embedding(num_decoder_tokens, latent_dim, mask_zero=True)
dec_emb = dec_emb_layer(decoder_inputs)
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(dec_emb, initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
And here is the code after I tried to add the attention layer, which raises the error below:

from keras.layers import Activation, dot, concatenate

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
dec_emb_layer = Embedding(num_decoder_tokens, latent_dim, mask_zero=True)
dec_emb = dec_emb_layer(decoder_inputs)
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
attention = dot([decoder_lstm, encoder_lstm], axes=[2, 2])
attention = Activation('softmax')(attention)
context = dot([attention, encoder_lstm], axes=[2, 1])
decoder_combined_context = concatenate([context, decoder_lstm])
decoder_outputs, _, _ = decoder_combined_context(dec_emb, initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
Layer dot_1 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.recurrent.LSTM'>. Full input: [<keras.layers.recurrent.LSTM object at 0x7f8f77e2f3c8>, <keras.layers.recurrent.LSTM object at 0x7f8f770beb70>]. All inputs to the layer should be tensors.
Best answer
The dot products need to be computed on tensor outputs, not on the layer objects themselves. In the encoder you already have the output tensor, encoder_outputs (note that the encoder LSTM must be built with return_sequences=True, as in the full example below, so that it returns the whole sequence of hidden states); in the decoder you have to add

decoder_outputs, state_h, state_c = decoder_lstm(dec_emb, initial_state=encoder_states)

The dot products then become:
attention = dot([decoder_outputs, encoder_outputs], axes=[2, 2])
attention = Activation('softmax')(attention)
context = dot([attention, encoder_outputs], axes=[2,1])
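As a sanity check on those axes, here is a minimal sketch of the shapes involved; the step counts 5 and 7 below are made-up values for illustration:

from keras.layers import Input, dot

latent_dim = 100
dec = Input(shape=(5, latent_dim))   # decoder hidden states: (batch, 5 decoder steps, latent_dim)
enc = Input(shape=(7, latent_dim))   # encoder hidden states: (batch, 7 encoder steps, latent_dim)
scores = dot([dec, enc], axes=[2, 2])      # -> (batch, 5, 7): one score per decoder/encoder step pair
context = dot([scores, enc], axes=[2, 1])  # -> (batch, 5, latent_dim): weighted sum of encoder states

The softmax over the last axis of the scores (the Activation above) turns each row into attention weights before the second dot computes the context vectors.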
The concatenation does not take initial_state; you define that on your RNN layer instead:

decoder_outputs, state_h, state_c = decoder_lstm(dec_emb, initial_state=encoder_states)

Here is the full example:
from keras.layers import Input, Embedding, LSTM, Dense, Activation, dot, concatenate
from keras.models import Model

# dummy variables
num_encoder_tokens = 30
num_decoder_tokens = 10
latent_dim = 100

encoder_inputs = Input(shape=(None,))
enc_emb = Embedding(num_encoder_tokens, latent_dim, mask_zero=True)(encoder_inputs)
encoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
encoder_outputs, state_h, state_c = encoder_lstm(enc_emb)
# We keep `encoder_outputs` for the attention step and the states
# to initialize the decoder.
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
dec_emb_layer = Embedding(num_decoder_tokens, latent_dim, mask_zero=True)
dec_emb = dec_emb_layer(decoder_inputs)
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(dec_emb, initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
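The comments above mention inference. For completeness, here is a minimal sketch, not part of the original answer, of how inference models are typically wired for this no-attention version, reusing the trained layers (the names ending in 2 are mine):

# Encoder inference model: map an input sequence to its final states.
encoder_model = Model(encoder_inputs, encoder_states)

# Decoder inference model: one step at a time, states fed back in.
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
dec_emb2 = dec_emb_layer(decoder_inputs)
decoder_outputs2, state_h2, state_c2 = decoder_lstm(dec_emb2, initial_state=decoder_states_inputs)
decoder_outputs2 = decoder_dense(decoder_outputs2)
decoder_model = Model([decoder_inputs] + decoder_states_inputs,
                      [decoder_outputs2, state_h2, state_c2])

With attention, the decoder model would additionally take encoder_outputs as an input, so the dot products can be recomputed at every decoding step.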
Decoder with attention:
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
dec_emb_layer = Embedding(num_decoder_tokens, latent_dim, mask_zero = True)
dec_emb = dec_emb_layer(decoder_inputs)
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, state_h, state_c = decoder_lstm(dec_emb, initial_state=encoder_states)
attention = dot([decoder_outputs, encoder_outputs], axes=[2, 2])
attention = Activation('softmax')(attention)
context = dot([attention, encoder_outputs], axes=[2,1])
decoder_outputs = concatenate([context, decoder_outputs])
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
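As a quick smoke test, not part of the original answer, the attention model can be compiled and fitted on random dummy sequences to confirm that the shapes line up (all sizes below are made up; depending on your Keras version, mask propagation through the merge layers may behave slightly differently):

import numpy as np

batch, seq_len = 16, 10
# Start token ids at 1 so the mask_zero=True embeddings see no padding.
enc_in = np.random.randint(1, num_encoder_tokens, size=(batch, seq_len))
dec_in = np.random.randint(1, num_decoder_tokens, size=(batch, seq_len))
# One-hot targets matching the softmax over num_decoder_tokens.
targets = np.eye(num_decoder_tokens)[np.random.randint(0, num_decoder_tokens, size=(batch, seq_len))]

model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([enc_in, dec_in], targets, epochs=1)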
Regarding python-3.x - adding an attention layer to a Seq2Seq model, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/62357239/