
tensorflow - Could not satisfy explicit device specification '/device:GPU:0' because no matching devices are registered


I want to use TensorFlow 0.12 with GPU support on an Ubuntu 14.04 machine.

However, when devices are assigned to nodes, I get the following error:

InvalidArgumentError (see above for traceback): Cannot assign a device to
node 'my_model/RNN/zeros': Could not satisfy explicit device specification
'/device:GPU:0' because no devices matching that specification are registered
in this process; available devices: /job:localhost/replica:0/task:0/cpu:0
[[Node: my_model/RNN/zeros = Fill[T=DT_FLOAT, _device="/device:GPU:0"]
(my_model/RNN/pack, my_model/RNN/zeros/Const)]]

My TensorFlow installation seems to be set up correctly, because this simple program works:
import tensorflow as tf

# Creates a graph.
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

# Runs the op.
print(sess.run(c))

which outputs:
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: Tesla K40m
major: 3 minor: 5
memoryClockRate (GHz) 0.745
pciBusID 0000:08:00.0
Total memory: 11.17GiB
Free memory: 11.10GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K40m, pci bus id: 0000:08:00.0)
Device mapping: /job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Tesla K40m, pci bus id: 0000:08:00.0
I tensorflow/core/common_runtime/direct_session.cc:255] Device mapping: /job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Tesla K40m, pci bus id: 0000:08:00.0

MatMul: (MatMul): /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:827] MatMul: (MatMul)/job:localhost/replica:0/task:0/gpu:0
b: (Const): /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:827] b: (Const)/job:localhost/replica:0/task:0/gpu:0
a: (Const): /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:827] a: (Const)/job:localhost/replica:0/task:0/gpu:0
[[ 22.  28.]
 [ 49.  64.]]

How do I correctly assign devices to nodes?

Best Answer

Try using sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)). This resolves the problem for ops that cannot run on the GPU, since some ops only have a CPU implementation.

With allow_soft_placement=True, TensorFlow is allowed to fall back to the CPU when no GPU implementation is available.
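For illustration, here is a minimal sketch of how allow_soft_placement fits into a graph that pins ops to /gpu:0. The placeholder and variable names are only examples, not taken from the question:

import tensorflow as tf

# Toy graph explicitly placed on the first GPU; names are illustrative.
with tf.device('/gpu:0'):
    x = tf.placeholder(tf.float32, shape=[None, 3], name='x')
    w = tf.Variable(tf.zeros([3, 1]), name='w')
    y = tf.matmul(x, w)

# allow_soft_placement=True lets TensorFlow move ops that lack a GPU kernel
# (or that were pinned to a device that is not registered) back to the CPU
# instead of raising InvalidArgumentError; log_device_placement=True prints
# where each op actually ran.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))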

Regarding "tensorflow - Could not satisfy explicit device specification '/device:GPU:0' because no matching devices are registered", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/44813939/
