I am trying to build a neural network with two loss functions that are combined as a weighted sum. The first simply computes the mean squared error between the linear output of a dense layer and the given labels, but the other makes heavy use of nested tf.map_fn. There are batch normalization layers built with tf.layers.batch_normalization(), so I had to add these lines to the optimization target:
with tf.name_scope("Optimizer"):
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        adam = tf.train.AdamOptimizer()
        self.train_op = adam.minimize(self.total_loss)
But I get the error:
AttributeError: 'NoneType' object has no attribute 'op'
It comes from the minimize() method. If I remove the control dependencies, the error does not occur. Likewise, if I remove the second optimization target, the one that depends on the loop, the error does not occur. I have already tested the second loss function in the forward pass and it works fine.
Any ideas how to track down the problem? Full error log:
Traceback (most recent call last):
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3267, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-6d5efdb6d091>", line 1, in <module>
runfile('/home/mtarasov/PycharmProjects/ML/src/utils/model.py', wdir='/home/mtarasov/PycharmProjects/ML/src/utils')
File "/home/mtarasov/Installations/pycharm-2018.2.4/helpers/pydev/_pydev_bundle/pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/home/mtarasov/Installations/pycharm-2018.2.4/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/mtarasov/PycharmProjects/ML/src/utils/model.py", line 168, in <module>
model = Model().build()
File "/home/mtarasov/PycharmProjects/ML/src/utils/model.py", line 60, in build
self.train_op = adam.minimize(self.total_loss)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 400, in minimize
grad_loss=grad_loss)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 514, in compute_gradients
colocate_gradients_with_ops=colocate_gradients_with_ops)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 596, in gradients
gate_gradients, aggregation_method, stop_gradients)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 663, in _GradientsHelper
to_ops, from_ops, colocate_gradients_with_ops, func_graphs, xs)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 190, in _PendingCount
between_op_list, between_ops, colocate_gradients_with_ops)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 1432, in MaybeCreateControlFlowState
loop_state.AddWhileContext(op, between_op_list, between_ops)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 1244, in AddWhileContext
grad_state = GradLoopState(forward_ctxt, outer_grad_state)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 846, in __init__
real_cnt, outer_grad_state)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2585, in AddBackpropLoopCounter
name="b_count")
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 248, in _Enter
data, frame_name, is_constant, parallel_iterations, name=name)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_control_flow_ops.py", line 178, in enter
parallel_iterations=parallel_iterations, name=name)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 454, in new_func
return func(*args, **kwargs)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3155, in create_op
op_def=op_def)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1746, in __init__
self._control_flow_post_processing()
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1757, in _control_flow_post_processing
self._control_flow_context.AddOp(self)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2438, in AddOp
self._AddOpInternal(op)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2475, in _AddOpInternal
for x in external_inputs if x.outputs]
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2475, in <listcomp>
for x in external_inputs if x.outputs]
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 80, in identity
return gen_array_ops.identity(input, name=name)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 3264, in identity
"Identity", input=input, name=name)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 454, in new_func
return func(*args, **kwargs)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3155, in create_op
op_def=op_def)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1746, in __init__
self._control_flow_post_processing()
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1757, in _control_flow_post_processing
self._control_flow_context.AddOp(self)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2438, in AddOp
self._AddOpInternal(op)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2466, in _AddOpInternal
self._MaybeAddControlDependency(op)
File "/home/mtarasov/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2504, in _MaybeAddControlDependency
op._add_control_input(self.GetControlPivot().op)
AttributeError: 'NoneType' object has no attribute 'op'
Best Answer
To add to mcstarioni's answer: as pointed out, replacing the batch normalization layers with tf.keras.layers.BatchNormalization seems to make the error go away. However, this is because BatchNormalization in keras does not add the batch normalization update ops to UPDATE_OPS, as described here, since it uses a different training workflow. If you inspect the moving mean and variance, you will find that they are not updated during training by running train_op alone. In addition to train_op, it is important to also run layer.updates, which should fix the problem.
Alternatively, if possible, try removing the nested map_fn.
Regarding "python - Error when using tensorflow tf.control_dependency with tf.layers.batch_normalization", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53503889/