
python - Tensor shape error with a stateful RNN in a Keras functional model

Reposted · Author: 行者123 · Updated: 2023-12-01

I have defined a Keras functional model that contains a block with a stateful LSTM, as follows:

import numpy as np
from tensorflow.python import keras


data = np.ones((1,2,3))

input_shape = data.shape # batch size, step size, input size
output_units = 2

### input block ###
inputs = keras.layers.Input(batch_shape=input_shape, name="inputs")


### model block with stateful RNN ###
dummy_inputs_1 = keras.layers.Input(batch_shape=input_shape, name="dummy_inputs_1")
recurrent_1 = keras.layers.LSTM(units=input_shape[-1], batch_input_shape=input_shape,
                                return_sequences=True, stateful=True,
                                name="recurrent_1")(dummy_inputs_1)

dense_1 = keras.layers.Dense(output_units, batch_input_shape=(
                                 input_shape[0], input_shape[-1], input_shape[1]),
                             name="dense_1")
output_1 = keras.layers.TimeDistributed(dense_1, input_shape=input_shape, name="output_1")(recurrent_1)

model_1 = keras.models.Model(inputs=[dummy_inputs_1], outputs=[output_1], name="model_1")
model_1.compile(loss='mean_squared_error',
                optimizer='Nadam',
                metrics=['accuracy'])

model_1.predict(data) # works

### add model block to model ###
model_block = model_1(inputs)
model = keras.models.Model(inputs=[inputs], outputs=[model_block], name="model")
model.compile(loss='mean_squared_error',
              optimizer='Nadam',
              metrics=['accuracy'])

model_1.predict(data) # works

model.predict(data) # fails

As written, the first predict() call (on the inner model block containing the stateful LSTM layer) works fine, but the second one fails with the following error:

Traceback (most recent call last):
  File ".../functional_stateful.py", line 38, in <module>
    model_1.predict(data)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training.py", line 1478, in predict
    self, x, batch_size=batch_size, verbose=verbose, steps=steps)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 363, in predict_loop
    batch_outs = f(ins_batch)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/keras/backend.py", line 2897, in __call__
    fetched = self._callable_fn(*array_vals)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1454, in __call__
    self._session._session, self._handle, args, status, None)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 519, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'inputs' with dtype float and shape [1,2,3]
    [[Node: inputs = Placeholder[dtype=DT_FLOAT, shape=[1,2,3], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

With stateful=True commented out of the LSTM definition, the whole thing runs fine. Does anyone know what is going on?

Edit: apparently, merely calling the stateful model block on another layer is enough to make that block's predict() fail (i.e. this code fails with the same error):

import numpy as np
from tensorflow.python import keras

data = np.ones((1,2,3))

input_shape = data.shape # batch size, step size, input size
output_units = 2

### input block ###
inputs = keras.layers.Input(batch_shape=input_shape, name="inputs")


### sample model block with stateful RNN ###
dummy_inputs_1 = keras.layers.Input(batch_shape=input_shape, name="dummy_inputs_1")
recurrent_1 = keras.layers.LSTM(units=input_shape[-1], batch_input_shape=input_shape,
                                return_sequences=True, stateful=True,
                                name="recurrent_1")(dummy_inputs_1)

model_1 = keras.models.Model(inputs=[dummy_inputs_1], outputs=[recurrent_1], name="model_1")
model_1.compile(loss='mean_squared_error',
                optimizer='Nadam',
                metrics=['accuracy'])

# ### add model block to model ###
model_block = model_1(inputs)

model_1.predict(data) # fails

Edit 2: however, adding a call to predict() before calling the stateful block on another block apparently lets you keep using it afterwards (i.e. the code below runs fine):

import numpy as np
from tensorflow.python import keras

data = np.ones((1,2,3))

input_shape = data.shape # batch size, step size, input size
output_units = 2

### input block ###
inputs = keras.layers.Input(batch_shape=input_shape, name="inputs")


### sample model block with stateful RNN ###
dummy_inputs_1 = keras.layers.Input(batch_shape=input_shape, name="dummy_inputs_1")
recurrent_1 = keras.layers.LSTM(units=input_shape[-1], batch_input_shape=input_shape,
                                return_sequences=True, stateful=True,
                                name="recurrent_1")(dummy_inputs_1)

model_1 = keras.models.Model(inputs=[dummy_inputs_1], outputs=[recurrent_1], name="model_1")
model_1.compile(loss='mean_squared_error',
                optimizer='Nadam',
                metrics=['accuracy'])

model_1.predict(data) # works

# ### add model block to model ###
model_block = model_1(inputs)

model_1.predict(data) # works

Best answer

I suspect that stateful=True RNNs are not compatible with having multiple inputs.
(In your code you have both dummy_inputs_1 and inputs. Keras refers to this in many of its messages as "multiple inbound nodes". In effect you have two parallel branches: one fed by the original dummy_inputs_1 and one by the new inputs.)

Why is that? A stateful=True layer is designed to receive "one sequence" (or many "parallel" sequences in a batch) that has been divided into groups of timesteps.

When it receives batch 2, it interprets it as the continuation of batch 1 with respect to the sequence's timesteps.

When you have two input tensors, how is the RNN supposed to decide which one continues which? You lose the consistency of a "continuous sequence". The layer has only "one state tensor", and it cannot use that to keep track of parallel tensors.
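The bookkeeping problem can be illustrated with a hand-rolled toy recurrence in plain NumPy (a hypothetical stand-in, not Keras internals): a single state vector that persists across calls cannot distinguish two interleaved input streams, so a second stream "contaminates" the state of the first.

```python
import numpy as np

class ToyStatefulRNN:
    """A toy stand-in for a stateful layer: one state vector that
    persists across calls, so every call is treated as the
    continuation of the previous one."""
    def __init__(self, units):
        self.state = np.zeros(units)

    def __call__(self, steps):
        for x in steps:
            # simplistic recurrence, just to show the bookkeeping
            self.state = np.tanh(self.state + x)
        return self.state.copy()

rnn = ToyStatefulRNN(units=2)
seq_a = [np.ones(2), np.ones(2)]

out_first = rnn(seq_a)

# A second, unrelated input stream mutates the same state vector:
rnn([-np.ones(2)])

# A fresh instance reproduces out_first; the "contaminated" one cannot.
assert np.allclose(ToyStatefulRNN(units=2)(seq_a), out_first)
assert not np.allclose(rnn(seq_a), out_first)
```

This is exactly the situation a stateful layer faces with two inbound nodes: one state tensor, two branches feeding it.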

So, if you want to use a stateful RNN with multiple inputs, I suggest creating copies of the layer. If you want them to share the same weights, that will probably require a custom layer with common weight tensors.
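One possible way to sketch "copies with common weights" (an assumption on my part, not from the answer) is to wrap a single LSTMCell in two separate keras.layers.RNN wrappers: the cell owns the weights once, while each wrapper keeps its own state tensor, so each branch has an unambiguous "continuous sequence".

```python
import numpy as np
from tensorflow import keras  # tf.keras; the question used tensorflow.python.keras

batch_shape = (1, 2, 3)  # batch size, step size, input size

# One cell => one set of weights; two RNN wrappers => two state tensors.
cell = keras.layers.LSTMCell(units=3)
branch_a = keras.layers.RNN(cell, return_sequences=True, stateful=True)
branch_b = keras.layers.RNN(cell, return_sequences=True, stateful=True)

in_a = keras.layers.Input(batch_shape=batch_shape)
in_b = keras.layers.Input(batch_shape=batch_shape)
model = keras.models.Model([in_a, in_b], [branch_a(in_a), branch_b(in_b)])

x = np.ones(batch_shape)
out_a, out_b = model.predict([x, x])
assert out_a.shape == batch_shape and out_b.shape == batch_shape
```

On the first call both branches start from zero states with identical weights, so identical inputs give identical outputs; afterwards each branch carries its own state independently.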

Now, if you intend to use this block only once, you should probably reuse model_1.input and model_1.output instead of feeding it another input tensor.
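Reusing the block's own input/output tensors avoids creating a second inbound node entirely; a minimal sketch of that suggestion, reusing the names from the question:

```python
import numpy as np
from tensorflow import keras  # tf.keras; the question used tensorflow.python.keras

data = np.ones((1, 2, 3))
input_shape = data.shape  # batch size, step size, input size

dummy_inputs_1 = keras.layers.Input(batch_shape=input_shape, name="dummy_inputs_1")
recurrent_1 = keras.layers.LSTM(units=input_shape[-1], return_sequences=True,
                                stateful=True, name="recurrent_1")(dummy_inputs_1)
model_1 = keras.models.Model(inputs=[dummy_inputs_1], outputs=[recurrent_1],
                             name="model_1")

# Wrap the block by reusing its existing tensors -- no new Input, no
# second inbound node on the stateful layer.
model = keras.models.Model(inputs=model_1.input, outputs=model_1.output,
                           name="model")

assert model.predict(data).shape == (1, 2, 3)
assert model_1.predict(data).shape == (1, 2, 3)  # both still work
```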

Regarding "python - Tensor shape error with a stateful RNN in a Keras functional model", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51268504/
