
TensorFlow: Dst tensor is not initialized

Reposted. Author: 行者123. Updated: 2023-12-03 08:58:23

I'm working through the MNIST For ML Beginners tutorial, and it gives me an error when I run print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})). Everything else runs fine.

Error and traceback:

InternalErrorTraceback (most recent call last)
<ipython-input-16-219711f7d235> in <module>()
----> 1 print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
338 try:
339 result = self._run(None, fetches, feed_dict, options_ptr,
--> 340 run_metadata_ptr)
341 if run_metadata:
342 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
562 try:
563 results = self._do_run(handle, target_list, unique_fetches,
--> 564 feed_dict_string, options, run_metadata)
565 finally:
566 # The movers are no longer used. Delete them.

/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
635 if handle is None:
636 return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
--> 637 target_list, options, run_metadata)
638 else:
639 return self._do_call(_prun_fn, self._session, handle, feed_dict,

/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_call(self, fn, *args)
657 # pylint: disable=protected-access
658 raise errors._make_specific_exception(node_def, op, error_message,
--> 659 e.code)
660 # pylint: enable=protected-access
661

InternalError: Dst tensor is not initialized.
[[Node: _recv_Placeholder_3_0/_1007 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_312__recv_Placeholder_3_0", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
[[Node: Mean_1/_1011 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_319_Mean_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

I just switched to a newer version of CUDA, so maybe this has something to do with that? The error seems to be about copying a tensor to the GPU.

Stack: EC2 g2.8xlarge machine, Ubuntu 14.04

Update:

print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys})) runs fine. This makes me suspect the problem is that I'm trying to transfer a huge tensor to the GPU and it can't handle it. Small tensors like a minibatch work fine.

Update 2:

I've figured out exactly how large a tensor has to be to cause this problem:

batch_size = 7509  # Works.
print(sess.run(accuracy, feed_dict={x: mnist.test.images[0:batch_size], y_: mnist.test.labels[0:batch_size]}))

batch_size = 7510  # Doesn't work; raises the Dst error.
print(sess.run(accuracy, feed_dict={x: mnist.test.images[0:batch_size], y_: mnist.test.labels[0:batch_size]}))

Best answer

In short, this error message is produced when there is not enough GPU memory to handle the batch size being fed.
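Given that, one workaround (a sketch, not part of the original answer; run_accuracy is a hypothetical stand-in for the sess.run(accuracy, ...) call from the question) is to evaluate the test set in chunks that do fit in GPU memory and combine the per-chunk accuracies, weighted by chunk size:

```python
def batched_accuracy(run_accuracy, images, labels, batch_size=5000):
    """Evaluate accuracy over images/labels in GPU-sized chunks and
    combine the per-chunk accuracies weighted by chunk size."""
    total_correct = 0.0
    total_seen = 0
    for start in range(0, len(images), batch_size):
        chunk_x = images[start:start + batch_size]
        chunk_y = labels[start:start + batch_size]
        # In the question's setup this call would be:
        #   sess.run(accuracy, feed_dict={x: chunk_x, y_: chunk_y})
        acc = run_accuracy(chunk_x, chunk_y)
        total_correct += acc * len(chunk_x)
        total_seen += len(chunk_x)
    return total_correct / total_seen

# Toy check with a fake accuracy function (exact matches -> accuracy 1.0):
fake_acc = lambda xs, ys: sum(a == b for a, b in zip(xs, ys)) / len(xs)
print(batched_accuracy(fake_acc, list(range(10)), list(range(10)), batch_size=3))  # -> 1.0
```

The weighting matters because the last chunk is usually smaller than the rest (here 10 examples split into chunks of 3, 3, 3 and 1), so a plain mean of per-chunk accuracies would be biased.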

Expanding on Steven's link (I can't comment yet), here are a few tricks for monitoring/controlling memory usage in TensorFlow:

  • To monitor memory usage during a run, consider logging run metadata. You can then view the memory usage of each node in the graph in TensorBoard. See the TensorBoard information page for more information and an example.
  • By default, TensorFlow tries to allocate as much GPU memory as possible. You can change this with the GPUOptions config so that TensorFlow only allocates as much memory as it needs. See the documentation on this. There you will also find an option that lets you allocate only a fraction of the GPU memory (though I've found it to be broken sometimes).
  • Regarding "TensorFlow: Dst tensor is not initialized", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37313818/
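As a sketch of the first two tips, assuming the TF 1.x-era session API used in the question (the sess, accuracy, feed_dict contents, and summary-writer names are taken from that context and are not defined here), the memory behaviour can be configured like this:

```python
import tensorflow as tf

# --- Tip 1: log run metadata to inspect per-node memory use in TensorBoard ---
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
# result = sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys},
#                   options=run_options, run_metadata=run_metadata)
# writer.add_run_metadata(run_metadata, 'eval_step')

# --- Tip 2: stop TensorFlow from grabbing all GPU memory up front ---
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow the allocation on demand
# Or cap the process at a fraction of total GPU memory
# (the option the answer notes is sometimes flaky):
# config.gpu_options.per_process_gpu_memory_fraction = 0.4
sess = tf.Session(config=config)
```

This is a configuration fragment rather than a runnable script: it needs a GPU-enabled TF 1.x install and a graph (x, y_, accuracy) already built, as in the tutorial code from the question.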
