
python - Tensorflow: replicating dynamic_rnn behavior using raw_rnn


I am trying to replicate the behavior of tf.nn.dynamic_rnn using the lower-level API tf.nn.raw_rnn. To do so, I use the same batch of data, set the random seed, and use the same hparams to create the cell and the recurrent network. However, the two implementations do not produce identical outputs. Below are the data and the code.

The data and sequence lengths:

# Batch-major input: shape (batch=3, max_time=3, features=3)
X = np.array([[[1.1, 2.2, 3.3], [4.4, 5.5, 6.6], [0.0, 0.0, 0.0]],
              [[1.1, 2.2, 3.3], [4.4, 5.5, 6.6], [7.7, 8.8, 9.9]],
              [[1.1, 2.2, 3.3], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]], dtype=np.float32)
X_len = np.array([2, 3, 1], dtype=np.int32)

The tf.nn.dynamic_rnn implementation:

import tensorflow as tf
import numpy as np

tf.reset_default_graph()
tf.set_random_seed(42)

inputs = tf.placeholder(shape=(3, None, 3), dtype=tf.float32)
lengths = tf.placeholder(shape=(None,), dtype=tf.int32)

lstm_cell = tf.nn.rnn_cell.LSTMCell(5)
outputs, state = tf.nn.dynamic_rnn(inputs=inputs, sequence_length=lengths, cell=lstm_cell, dtype=tf.float32,
                                   initial_state=lstm_cell.zero_state(3, dtype=tf.float32), time_major=True)
outputs_reshaped = tf.transpose(outputs, perm=[1, 0, 2])

sess = tf.Session()
sess.run(tf.initializers.global_variables())
X = np.transpose(X, (1, 0, 2))  # to time-major: (max_time, batch, features)
hidden_state = sess.run(outputs_reshaped, feed_dict={inputs: X, lengths: X_len})
print(hidden_state)

The tf.nn.raw_rnn implementation:

tf.reset_default_graph()
tf.set_random_seed(42)

inputs = tf.placeholder(shape=(3, None, 3), dtype=tf.float32)
lengths = tf.placeholder(shape=(None,), dtype=tf.int32)

inputs_ta = tf.TensorArray(dtype=tf.float32, size=3)
inputs_ta = inputs_ta.unstack(inputs)

lstm_cell = tf.nn.rnn_cell.LSTMCell(5)

def loop_fn(time, cell_output, cell_state, loop_state):
    emit_output = cell_output  # == None for time == 0
    if cell_output is None:  # time == 0
        next_cell_state = lstm_cell.zero_state(3, tf.float32)
    else:
        next_cell_state = cell_state

    elements_finished = (time >= lengths)
    finished = tf.reduce_all(elements_finished)
    next_input = tf.cond(finished, true_fn=lambda: tf.zeros([3, 3], dtype=tf.float32),
                         false_fn=lambda: inputs_ta.read(time))

    next_loop_state = None

    return (elements_finished, next_input, next_cell_state, emit_output, next_loop_state)

outputs_ta, final_state, _ = tf.nn.raw_rnn(lstm_cell, loop_fn)
outputs_reshaped = tf.transpose(outputs_ta.stack(), perm=[1, 0, 2])

sess = tf.Session()
sess.run(tf.initializers.global_variables())

X = np.transpose(X, (1, 0, 2))  # to time-major: (max_time, batch, features)
hidden_state = sess.run(outputs_reshaped, feed_dict={inputs: X, lengths: X_len})

print(hidden_state)

I am sure there is some difference between them, but I cannot figure out where it is or what it is. If anyone has an idea, that would be great.

Looking forward to your answers!

Best Answer

The difference arises because your variables are initialized to different values. You can see this by calling:

print(sess.run(tf.trainable_variables()))

after they have been initialized.

The reason for this difference is that TensorFlow has both a global seed and a per-operation seed, so setting the global random seed does not force the initializer calls hidden inside the LSTM code to use the same operation seed. See this answer for more details. To summarize: the seed actually used by any random operation is derived from your global seed combined with the ID of the last operation that was added to the graph.
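The effect is easy to reproduce in isolation. Below is a minimal sketch (an addition, not from the original answer; the helper init_value is hypothetical) showing that the same global seed yields different initial values once an extra op shifts the op IDs:

import tensorflow as tf

def init_value(add_extra_op):
    tf.reset_default_graph()
    tf.set_random_seed(42)  # same global seed in both runs
    if add_extra_op:
        tf.constant(0.0)  # shifts the op IDs of everything created afterwards
    # the initializer's random op derives its per-op seed from the graph's last op ID
    v = tf.get_variable("v", shape=(2,), initializer=tf.glorot_uniform_initializer())
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        return sess.run(v)

print(init_value(False))
print(init_value(True))  # differs from the line above despite the same global seed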

Knowing this, we can force the variable seeds to be the same in both implementations by building the graph in exactly the same order up to the point where the variables are created: we start from the same global seed and add the same operations to the graph in the same order, up to and including the variables, so the variables end up with the same operation seeds. We can do this as follows:

tf.reset_default_graph()
tf.set_random_seed(42)
lstm_cell = tf.nn.rnn_cell.LSTMCell(5)
inputs_shape = (3, None, 3)
lstm_cell.build(inputs_shape)

The build call is needed because it is what actually adds the variables to the graph.
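A quick way to convince yourself of this (an addition, not part of the original answer) is to list the trainable variables before and after the call:

tf.reset_default_graph()
tf.set_random_seed(42)
lstm_cell = tf.nn.rnn_cell.LSTMCell(5)
print(tf.trainable_variables())  # [] -- nothing has been created yet
lstm_cell.build((3, None, 3))
print(tf.trainable_variables())  # now contains the cell's kernel and bias variables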

Here is a complete working version of what you have:

import tensorflow as tf
import numpy as np


X = np.array([[[1.1, 2.2, 3.3], [4.4, 5.5, 6.6], [0.0, 0.0, 0.0]], [[1.1, 2.2, 3.3], [4.4, 5.5, 6.6], [7.7, 8.8, 9.9]], [[1.1, 2.2, 3.3], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]], dtype=np.float32)
X_len = np.array([2, 3, 1], dtype=np.int32)


def dynamic():
    tf.reset_default_graph()
    tf.set_random_seed(42)
    lstm_cell = tf.nn.rnn_cell.LSTMCell(5)
    inputs_shape = (3, None, 3)
    lstm_cell.build(inputs_shape)

    inputs = tf.placeholder(shape=inputs_shape, dtype=tf.float32)
    lengths = tf.placeholder(shape=(None,), dtype=tf.int32)

    outputs, state = tf.nn.dynamic_rnn(inputs=inputs, sequence_length=lengths, cell=lstm_cell, dtype=tf.float32,
                                       initial_state=lstm_cell.zero_state(3, dtype=tf.float32), time_major=True)
    outputs_reshaped = tf.transpose(outputs, perm=[1, 0, 2])

    sess = tf.Session()
    sess.run(tf.initializers.global_variables())
    a = np.transpose(X, (1, 0, 2))
    hidden_state = sess.run(outputs_reshaped, feed_dict={inputs: a, lengths: X_len})
    print(hidden_state)


def replicated():
    tf.reset_default_graph()
    tf.set_random_seed(42)
    lstm_cell = tf.nn.rnn_cell.LSTMCell(5)
    inputs_shape = (3, None, 3)
    lstm_cell.build(inputs_shape)

    inputs = tf.placeholder(shape=inputs_shape, dtype=tf.float32)
    lengths = tf.placeholder(shape=(None,), dtype=tf.int32)

    inputs_ta = tf.TensorArray(dtype=tf.float32, size=3)
    inputs_ta = inputs_ta.unstack(inputs)

    def loop_fn(time, cell_output, cell_state, loop_state):
        emit_output = cell_output  # == None for time == 0
        if cell_output is None:  # time == 0
            next_cell_state = lstm_cell.zero_state(3, tf.float32)
        else:
            next_cell_state = cell_state

        elements_finished = (time >= lengths)
        finished = tf.reduce_all(elements_finished)
        next_input = tf.cond(finished, true_fn=lambda: tf.zeros([3, 3], dtype=tf.float32),
                             false_fn=lambda: inputs_ta.read(time))

        next_loop_state = None

        return (elements_finished, next_input, next_cell_state, emit_output, next_loop_state)

    outputs_ta, final_state, _ = tf.nn.raw_rnn(lstm_cell, loop_fn)
    outputs_reshaped = tf.transpose(outputs_ta.stack(), perm=[1, 0, 2])

    sess = tf.Session()
    sess.run(tf.initializers.global_variables())

    a = np.transpose(X, (1, 0, 2))
    hidden_state = sess.run(outputs_reshaped, feed_dict={inputs: a, lengths: X_len})

    print(hidden_state)


if __name__ == '__main__':
    dynamic()
    replicated()
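To confirm that the two implementations now agree, one option (again an addition, not part of the original answer) is to have each function return hidden_state instead of printing it and compare the two results:

# assumes dynamic() and replicated() are modified to return hidden_state
out_dynamic = dynamic()
out_replicated = replicated()
print(np.allclose(out_dynamic, out_replicated))  # expected: True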

Regarding python - Tensorflow: replicating dynamic_rnn behavior using raw_rnn, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54374956/
