I was able to implement a stateless model with the code below:
import os
os.environ['TF_ENABLE_CONTROL_FLOW_V2'] = '1'

import numpy as np
import tensorflow as tf
from tensorflow_core.python.keras.models import Model, Sequential
from tensorflow_core.python.keras.layers.core import Dense, Activation, Lambda, Reshape
from tensorflow_core.python.keras.engine.input_layer import Input
from tensorflow_core.python.keras.layers.recurrent import RNN, StackedRNNCells
from tensorflow_core.lite.experimental.examples.lstm.rnn_cell import TFLiteLSTMCell, TfLiteRNNCell
from tensorflow_core.lite.experimental.examples.lstm.rnn import dynamic_rnn
from tensorflow_core.python.ops.rnn_cell_impl import LSTMStateTuple


def buildRNNLayer(inputs, rnn_cells):
    """Build the stacked RNN layer.

    Args:
      inputs: The input data.
      rnn_cells: The list of RNN cells to stack.
    """
    rnn_layers = StackedRNNCells(rnn_cells)
    # Assume the input is sized as [batch, time, input_size], then we're going
    # to transpose it to be time-major.
    transposed_inputs = tf.transpose(inputs, perm=[1, 0, 2])
    outputs, _ = dynamic_rnn(
        rnn_layers,
        transposed_inputs,
        dtype='float32',
        time_major=True)
    unstacked_outputs = tf.unstack(outputs, axis=0)
    # Return only the output of the last time step.
    return unstacked_outputs[-1]


def build_rnn_lite(model):
    tf.reset_default_graph()
    # Construct the RNN cells.
    cells = []
    for layer in range(3):
        if model == 'LSTMLite':
            cells.append(TFLiteLSTMCell(192, name='lstm{}'.format(layer)))
        else:
            cells.append(TfLiteRNNCell(192, name='rnn{}'.format(layer)))
    spec_input = Input(shape=(5, 64,), name='rnn_in', batch_size=8192)
    x = Lambda(buildRNNLayer, arguments={'rnn_cells': cells}, name=model.lower())(spec_input)
    out = Dense(64, activation='sigmoid', name='fin_dense')(x)
    return Model(inputs=spec_input, outputs=out)


model = build_rnn_lite('LSTMLite')

###### TF LITE CONVERSION
sess = tf.keras.backend.get_session()
input_tensor = sess.graph.get_tensor_by_name('rnn_in:0')
output_tensor = sess.graph.get_tensor_by_name('fin_dense/Sigmoid:0')
converter = tf.lite.TFLiteConverter.from_session(sess, [input_tensor], [output_tensor])
tflite = converter.convert()
print('Model converted successfully!')
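For reference, a quick way to smoke-test the converted flatbuffer is the TF 1.15 tf.lite.Interpreter. This is a minimal sketch, assuming tflite holds the buffer returned by converter.convert() above and using random input data:

import numpy as np
import tensorflow as tf

# Load the converted model and run a single inference with random input.
interpreter = tf.lite.Interpreter(model_content=tflite)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# The input shape comes from the model above: (batch_size=8192, 5, 64).
dummy = np.random.randn(*input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)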
This works fine. I am now trying to create a stateful model, i.e. one that is fed the previous state along with the input, by changing the code as follows:
def buildRNNLayer(inputs, rnn_cells, initial_state=None):
    """Build the stacked RNN layer.

    Args:
      inputs: The input data.
      rnn_cells: The stacked RNN cells.
      initial_state: Optional state to start from instead of zeros.
    """
    # Assume the input is sized as [batch, time, input_size], then we're going
    # to transpose it to be time-major.
    transposed_inputs = tf.transpose(inputs, perm=[1, 0, 2])
    outputs, new_state = dynamic_rnn(
        rnn_cells,
        transposed_inputs,
        initial_state=initial_state,
        dtype='float32',
        time_major=True)
    unstacked_outputs = tf.unstack(outputs, axis=0)
    return unstacked_outputs[-1], new_state


def build_rnn_lite(model, state=False):
    tf.reset_default_graph()
    # Construct the RNN cells.
    cells = []
    for layer in range(3):
        if model == 'LSTMLite':
            cells.append(TFLiteLSTMCell(192, name='lstm{}'.format(layer)))
        else:
            cells.append(TfLiteRNNCell(192, name='rnn{}'.format(layer)))
    cells = StackedRNNCells(cells)
    # Named init_state so it does not shadow the boolean `state` flag.
    init_state = cells.get_initial_state(batch_size=1, dtype=tf.float32)
    if state:
        spec_input = Input(shape=(5, 64,), name='rnn_in', batch_size=1)
        x, state = Lambda(buildRNNLayer,
                          arguments={'rnn_cells': cells, 'initial_state': init_state},
                          name=model.lower())(spec_input)
    else:
        spec_input = Input(shape=(5, 64,), name='rnn_in')
        x, state = Lambda(buildRNNLayer, arguments={'rnn_cells': cells},
                          name=model.lower())(spec_input)
    out = Dense(64, activation='sigmoid', name='fin_dense')(x)
    return Model(inputs=spec_input, outputs=[out, state])


model = build_rnn_lite('LSTMLite', True)
in_rnn = np.random.randn(1, 5, 64)
out1 = model.predict(in_rnn)
out2 = model.predict(in_rnn)

###### TF LITE CONVERSION
sess = tf.keras.backend.get_session()
input_tensor = sess.graph.get_tensor_by_name('rnn_in:0')
output_tensor = sess.graph.get_tensor_by_name('fin_dense/Sigmoid:0')
converter = tf.lite.TFLiteConverter.from_session(sess, [input_tensor], [output_tensor])
tflite = converter.convert()
print('Model converted successfully!')
With the code changed as above, out1 and out2 are identical. That should not be the case if the state were being reused rather than reset. What other changes are needed to make sure the new_state from one batch is used for the next batch instead of the state being reset?
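For comparison, the standard TF1 pattern for carrying state across batches in a session-based workflow is to fetch the final state and feed it back in on the next run. Below is a minimal sketch of that pattern with a plain tf.nn.rnn_cell.LSTMCell (not the TFLite cells used above; all names and shapes here are illustrative):

import numpy as np
import tensorflow as tf

tf.reset_default_graph()
cell = tf.nn.rnn_cell.LSTMCell(192)
inputs = tf.placeholder(tf.float32, [5, 1, 64])  # time-major: [time, batch, features]
init_state = cell.zero_state(batch_size=1, dtype=tf.float32)
outputs, final_state = tf.nn.dynamic_rnn(
    cell, inputs, initial_state=init_state, time_major=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    x = np.random.randn(5, 1, 64).astype(np.float32)
    # First run starts from the zero state; capture the final state.
    out1, st = sess.run([outputs, final_state], {inputs: x})
    # Next run: feed the captured state back in, overriding the zeros.
    out2, st = sess.run([outputs, final_state],
                        {inputs: x, init_state.c: st.c, init_state.h: st.h})
    # out1 and out2 now differ for identical inputs, because the state carried over.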
from tensorflow_core.python.ops.rnn_cell_impl import MultiRNNCell  # needed by buildMultiCell below


def get_state_variables(batch_size, cell):
    # For each layer, get the initial state and make a variable out of it
    # to enable updating its value.
    state_variables = []
    for state_c, state_h in cell.zero_state(batch_size, tf.float32):
        state_variables.append(tf.contrib.rnn.LSTMStateTuple(
            tf.Variable(state_c, trainable=False),
            tf.Variable(state_h, trainable=False)))
    # Return as a tuple, so that it can be fed to dynamic_rnn as an initial state.
    return tuple(state_variables)


def get_state_update_op(state_variables, new_states):
    # Add an operation to update the train states with the last state tensors.
    update_ops = []
    for state_variable, new_state in zip(state_variables, new_states):
        # Assign the new state to the state variables on this layer.
        update_ops.extend([state_variable[0].assign(new_state[0]),
                           state_variable[1].assign(new_state[1])])
    # Return a tuple in order to combine all update_ops into a single operation.
    # The tuple's actual value should not be used.
    return tf.tuple(update_ops)


def buildMultiCell(cells):
    return MultiRNNCell(cells)


def buildRNNLayer(inputs, rnn_cells, initial_state=None):
    """Build the stacked RNN layer.

    Args:
      inputs: The input data.
      rnn_cells: The stacked RNN cells.
      initial_state: Optional state to start from instead of zeros.
    """
    # Assume the input is sized as [batch, time, input_size], then we're going
    # to transpose it to be time-major.
    transposed_inputs = tf.transpose(inputs, perm=[1, 0, 2])
    outputs, new_state = dynamic_rnn(
        rnn_cells,
        transposed_inputs,
        initial_state=initial_state,
        dtype='float32',
        time_major=True)
    unstacked_outputs = tf.unstack(outputs, axis=0)
    # The assign ops are created here but never fetched, so they never run.
    update_op = get_state_update_op(initial_state, new_state)
    return unstacked_outputs[-1]


def build_rnn_lite(model, state=False):
    tf.reset_default_graph()
    # Construct the RNN cells.
    cells = []
    for layer in range(3):
        if model == 'LSTMLite':
            cells.append(TFLiteLSTMCell(192, name='lstm{}'.format(layer)))
        else:
            cells.append(TfLiteRNNCell(192, name='rnn{}'.format(layer)))
    rnn_cells = Lambda(buildMultiCell, name='multicell')(cells)
    states = get_state_variables(1, rnn_cells)
    if state:
        spec_input = Input(shape=(5, 64,), name='rnn_in', batch_size=1)
        x = Lambda(buildRNNLayer,
                   arguments={'rnn_cells': rnn_cells, 'initial_state': states},
                   name=model.lower())(spec_input)
    else:
        spec_input = Input(shape=(5, 64,), name='rnn_in')
        x = Lambda(buildRNNLayer, arguments={'rnn_cells': rnn_cells},
                   name=model.lower())(spec_input)
    out = Dense(64, activation='sigmoid', name='fin_dense')(x)
    return Model(inputs=spec_input, outputs=out)


model = build_rnn_lite('LSTMLite', True)
in_rnn = np.random.randn(1, 5, 64)
out1 = model.predict(in_rnn)
out2 = model.predict(in_rnn)

###### TF LITE CONVERSION
sess = tf.keras.backend.get_session()
input_tensor = sess.graph.get_tensor_by_name('rnn_in:0')
output_tensor = sess.graph.get_tensor_by_name('fin_dense/Sigmoid:0')
converter = tf.lite.TFLiteConverter.from_session(sess, [input_tensor], [output_tensor])
tflite = converter.convert()
print('Model converted successfully!')
Following other examples from around the internet, I was able to get another version working, but the new state is not updated in this version either. Does anyone know how to fix this?
Best answer
I think I was able to solve this with the code below:
from tensorflow_core.python.ops.rnn_cell_impl import MultiRNNCell  # needed by buildMultiCell below


def get_state_variables(batch_size, cell):
    # For each layer, get the initial state and make a variable out of it
    # to enable updating its value.
    state_variables = []
    for state_c, state_h in cell.zero_state(batch_size, tf.float32):
        state_variables.append(tf.contrib.rnn.LSTMStateTuple(
            tf.Variable(state_c, trainable=False),
            tf.Variable(state_h, trainable=False)))
    # Return as a tuple, so that it can be fed to dynamic_rnn as an initial state.
    return tuple(state_variables)


def get_state_update_op(state_variables, new_states):
    # Add an operation to update the train states with the last state tensors.
    update_ops = []
    for state_variable, new_state in zip(state_variables, new_states):
        # Assign the new state to the state variables on this layer.
        update_ops.extend([state_variable[0].assign(new_state[0]),
                           state_variable[1].assign(new_state[1])])
    # Return a tuple in order to combine all update_ops into a single operation.
    # The tuple's actual value should not be used.
    return tf.tuple(update_ops)


def buildMultiCell(cells):
    return MultiRNNCell(cells)


def buildRNNLayer(inputs, rnn_cells, initial_state=None):
    """Build the stacked RNN layer.

    Args:
      inputs: The input data.
      rnn_cells: The stacked RNN cells.
      initial_state: Optional state to start from instead of zeros.
    """
    # Assume the input is sized as [batch, time, input_size], then we're going
    # to transpose it to be time-major.
    transposed_inputs = tf.transpose(inputs, perm=[1, 0, 2])
    outputs, new_state = dynamic_rnn(
        rnn_cells,
        transposed_inputs,
        initial_state=initial_state,
        dtype='float32',
        time_major=True)
    unstacked_outputs = tf.unstack(outputs, axis=0)
    # update_op = get_state_update_op(initial_state, new_state)
    return unstacked_outputs[-1], new_state


def build_rnn_lite(model, state=False):
    tf.reset_default_graph()
    # Construct the RNN cells.
    cells = []
    for layer in range(3):
        if model == 'LSTMLite':
            cells.append(TFLiteLSTMCell(192, name='lstm{}'.format(layer)))
        else:
            cells.append(TfLiteRNNCell(192, name='rnn{}'.format(layer)))
    rnn_cells = Lambda(buildMultiCell, name='multicell')(cells)
    states = get_state_variables(1, rnn_cells)
    if state:
        spec_input = Input(shape=(5, 64,), name='rnn_in', batch_size=1)
        x, new_states = Lambda(buildRNNLayer,
                               arguments={'rnn_cells': rnn_cells, 'initial_state': states},
                               name=model.lower())(spec_input)
        # Route the assign ops through the model outputs so they actually run.
        updated_states = Lambda(get_state_update_op,
                                arguments={'new_states': new_states})(states)
    else:
        spec_input = Input(shape=(5, 64,), name='rnn_in')
        x, new_states = Lambda(buildRNNLayer, arguments={'rnn_cells': rnn_cells},
                               name=model.lower())(spec_input)
        updated_states = Lambda(get_state_update_op,
                                arguments={'new_states': states})(states)
    out = Dense(64, activation='sigmoid', name='fin_dense')(x)
    return Model(inputs=spec_input, outputs=[out, updated_states])


model = build_rnn_lite('LSTMLite', True)
in_rnn = np.random.randn(1, 5, 64)
out1 = model.predict(in_rnn)
out2 = model.predict(in_rnn)

###### TF LITE CONVERSION
sess = tf.keras.backend.get_session()
input_tensor = sess.graph.get_tensor_by_name('rnn_in:0')
output_tensor = sess.graph.get_tensor_by_name('fin_dense/Sigmoid:0')
converter = tf.lite.TFLiteConverter.from_session(sess, [input_tensor], [output_tensor])
tflite = converter.convert()
print('Model converted successfully!')
The updated_states in this version of the code do appear to change, so the state is hopefully being updated correctly.
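Routing the assign ops from get_state_update_op through the model's outputs is what forces them to execute on every predict(); in the earlier attempt the update op was built inside buildRNNLayer but never fetched, so it never ran. As a sanity check, here is a sketch that snapshots the state variables around a predict() call and resets them afterwards; it assumes the holders created by get_state_variables are the only non-trainable variables in the graph:

import numpy as np
import tensorflow as tf

sess = tf.keras.backend.get_session()
# Assumption: only the LSTM state holders are non-trainable in this graph.
state_vars = [v for v in tf.global_variables() if v not in tf.trainable_variables()]

before = sess.run(state_vars)
model.predict(np.random.randn(1, 5, 64))
after = sess.run(state_vars)
print('state changed:', any(not np.array_equal(a, b) for a, b in zip(before, after)))

# Reset the carried state between independent sequences.
sess.run([v.initializer for v in state_vars])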
Regarding python - how to create a stateful TensorFlowLite RNN model in TF1.15, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59962348/