I am installing TensorFlow from source (documentation).
CUDA driver version:
nvcc: NVIDIA (R) Cuda compiler driver
Cuda compilation tools, release 7.5, V7.5.17
When I run the following command:
bazel-bin/tensorflow/cc/tutorials_example_trainer --use_gpu
it gives me the following error:
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:925] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:118] Found device 0 with properties:
name: GeForce GT 640
major: 3 minor: 0 memoryClockRate (GHz) 0.9015
pciBusID 0000:05:00.0
Total memory: 2.00GiB
Free memory: 1.98GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:138] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:148] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:843] Ignoring gpu device (device: 0, name: GeForce GT 640, pci bus id: 0000:05:00.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:843] Ignoring gpu device (device: 0, name: GeForce GT 640, pci bus id: 0000:05:00.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:843] Ignoring gpu device (device: 0, name: GeForce GT 640, pci bus id: 0000:05:00.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:843] Ignoring gpu device (device: 0, name: GeForce GT 640, pci bus id: 0000:05:00.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:843] Ignoring gpu device (device: 0, name: GeForce GT 640, pci bus id: 0000:05:00.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:843] Ignoring gpu device (device: 0, name: GeForce GT 640, pci bus id: 0000:05:00.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:843] Ignoring gpu device (device: 0, name: GeForce GT 640, pci bus id: 0000:05:00.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:843] Ignoring gpu device (device: 0, name: GeForce GT 640, pci bus id: 0000:05:00.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:843] Ignoring gpu device (device: 0, name: GeForce GT 640, pci bus id: 0000:05:00.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:843] Ignoring gpu device (device: 0, name: GeForce GT 640, pci bus id: 0000:05:00.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
F tensorflow/cc/tutorials/example_trainer.cc:128] Check failed: ::tensorflow::Status::OK() == (session->Run({{"x", x}}, {"y:0", "y_normalized:0"}, {}, &outputs)) (OK vs. Invalid argument: Cannot assign a device to node 'Cast': Could not satisfy explicit device specification '/gpu:0' because no devices matching that specification are registered in this process; available devices: /job:localhost/replica:0/task:0/cpu:0
[[Node: Cast = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, _device="/gpu:0"](Const)]])
F tensorflow/cc/tutorials/example_trainer.cc:128] Check failed: ::tensorflow::Status::OK() == (session->Run({{"x", x}}, {"y:0", "y_normalized:0"}, {}, &outputs)) (OK vs. Invalid argument: Cannot assign a device to node 'Cast': Could not satisfy explicit device specification '/gpu:0' because no devices matching that specification are registered in this process; available devices: /job:localhost/replica:0/task:0/cpu:0
[[Node: Cast = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, _device="/gpu:0"](Const)]])
F tensorflow/cc/tutorials/example_trainer.cc:128] Check failed: ::tensorflow::Status::OK() == (session->Run({{"x", x}}, {"y:0", "y_normalized:0"}, {}, &outputs)) (OK vs. Invalid argument: Cannot assign a device to node 'Cast': Could not satisfy explicit device specification '/gpu:0' because no devices matching that specification are registered in this process; available devices: /job:localhost/replica:0/task:0/cpu:0
[[Node: Cast = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, _device="/gpu:0"](Const)]])
F tensorflow/cc/tutorials/example_trainer.cc:128] Check failed: ::tensorflow::Status::OK() == (session->Run({{"x", x}}, {"y:0", "y_normalized:0"}, {}, &outputs)) (OK vs. Invalid argument: Cannot assign a device to node 'Cast': Could not satisfy explicit device specification '/gpu:0' because no devices matching that specification are registered in this process; available devices: /job:localhost/replica:0/task:0/cpu:0
[[Node: Cast = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, _device="/gpu:0"](Const)]])
Aborted (core dumped)
Do I need a different GPU to run this?
Best Answer
I have installed TensorFlow revision 1.8. It recommends CUDA 9.0. I am using a GTX 650M card, which has CUDA compute capability 3.0, and it now works fine. The OS is Ubuntu 18.04. Here are the detailed steps:
My build also includes ffmpeg and some related packages needed to compile OpenCV 3.4; skip those if you do not need them. Run the following commands:
sudo apt-get update
sudo apt-get dist-upgrade -y
sudo apt-get autoremove -y
sudo apt-get upgrade
sudo add-apt-repository ppa:jonathonf/ffmpeg-3 -y
sudo apt-get update
sudo apt-get install build-essential -y
sudo apt-get install ffmpeg -y
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev -y
sudo apt-get install python-dev libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev -y
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev -y
sudo apt-get install libxvidcore-dev libx264-dev -y
sudo apt-get install unzip qtbase5-dev python-dev python3-dev python-numpy python3-numpy -y
sudo apt-get install libopencv-dev libgtk-3-dev libdc1394-22 libdc1394-22-dev libjpeg-dev libpng12-dev libtiff5-dev libjasper-dev -y
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libxine2-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev -y
sudo apt-get install libv4l-dev libtbb-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev -y
sudo apt-get install libvorbis-dev libxvidcore-dev v4l-utils vtk6 -y
sudo apt-get install liblapacke-dev libopenblas-dev libgdal-dev checkinstall -y
sudo apt-get install libgtk-3-dev -y
sudo apt-get install libatlas-base-dev gfortran -y
sudo apt-get install qt-sdk -y
sudo apt-get install python2.7-dev python3.5-dev python-tk -y
sudo apt-get install cython libgflags-dev -y
sudo apt-get install tesseract-ocr -y
sudo apt-get install tesseract-ocr-eng -y
sudo apt-get install tesseract-ocr-ell -y
sudo apt-get install gstreamer1.0-python3-plugin-loader -y
sudo apt-get install libdc1394-22-dev -y
sudo apt-get install openjdk-8-jdk
sudo apt-get install pkg-config zip g++-6 gcc-6 zlib1g-dev unzip git
sudo wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
sudo pip install -U pip
sudo pip install -U numpy
sudo pip install -U pandas
sudo pip install -U wheel
sudo pip install -U six
Run the following commands:
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-390 -y
Reboot, then run the following command; it should show you details like the image below:
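The screenshot referred to above is not reproduced here, and the original text does not include the command itself. As an assumption on my part, a common way to confirm after the reboot that the nvidia-390 driver is active is:
nvidia-smi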
CUDA 9.0 requires gcc-6 and g++-6. Run the following commands:
cd /usr/bin
sudo rm -rf gcc gcc-ar gcc-nm gcc-ranlib g++
sudo ln -s gcc-6 gcc
sudo ln -s gcc-ar-6 gcc-ar
sudo ln -s gcc-nm-6 gcc-nm
sudo ln -s gcc-ranlib-6 gcc-ranlib
sudo ln -s g++-6 g++
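As a quick sanity check (not part of the original steps), confirm that the new symlinks point at the version-6 toolchain:
gcc --version
g++ --version
Both should report 6.x.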
Go to https://developer.nvidia.com/cuda-90-download-archive . Select the options: Linux -> x86_64 -> Ubuntu -> 17.04 -> deb (local). Download the main file and the two patches. Run the following commands:
sudo dpkg -i cuda-repo-ubuntu1704-9-0-local_9.0.176-1_amd64.deb
sudo apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda
On your PC, navigate to the first patch and double-click it; it will install automatically. Do the same for the second patch.
Add the following to your ~/.bashrc file and reload it:
export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
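A small check that is not in the original answer: after reloading ~/.bashrc, nvcc should be found on the PATH and report release 9.0:
source ~/.bashrc
nvcc --version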
Download the tar file from https://developer.nvidia.com/cudnn and extract it in your Downloads folder. The download requires an NVIDIA developer login (registration is free). Run the following commands:
cd ~/Downloads/cudnn-9.0-linux-x64-v7.1/cuda
sudo cp include/* /usr/local/cuda/include/
sudo cp lib64/libcudnn.so.7.1.4 lib64/libcudnn_static.a /usr/local/cuda/lib64/
cd /usr/lib/x86_64-linux-gnu
sudo ln -s libcudnn.so.7.1.4 libcudnn.so.7
sudo ln -s libcudnn.so.7 libcudnn.so
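Optionally (an extra check, not part of the original steps), list the cuDNN symlinks you just created:
ls -l /usr/lib/x86_64-linux-gnu/libcudnn*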
Download the tar file from https://developer.nvidia.com/nccl and extract it in your Downloads folder. The download requires an NVIDIA developer login (registration is free). Run the following commands:
sudo mkdir -p /usr/local/cuda/nccl/lib /usr/local/cuda/nccl/include
cd ~/Downloads/nccl-repo-ubuntu1604-2.2.12-ga-cuda9.0_1-1_amd64/
sudo cp *.txt /usr/local/cuda/nccl
sudo cp include/*.h /usr/include/
sudo cp lib/libnccl.so.2.1.15 lib/libnccl_static.a /usr/lib/x86_64-linux-gnu/
sudo ln -s /usr/include/nccl.h /usr/local/cuda/nccl/include/nccl.h
cd /usr/lib/x86_64-linux-gnu
sudo ln -s libnccl.so.2.1.15 libnccl.so.2
sudo ln -s libnccl.so.2 libnccl.so
for i in libnccl*; do sudo ln -s /usr/lib/x86_64-linux-gnu/$i /usr/local/cuda/nccl/lib/$i; done
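Again optionally (not in the original answer), confirm that the NCCL headers and libraries ended up where the configure step below will look for them:
ls -l /usr/local/cuda/nccl/include /usr/local/cuda/nccl/lib
ls -l /usr/lib/x86_64-linux-gnu/libnccl*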
Download "bazel-0.13.1-installer-linux-x86_64.sh" from https://github.com/bazelbuild/bazel/releases (the Linux installer, since we are on Ubuntu) and run the following commands:
chmod +x bazel-0.13.1-installer-linux-x86_64.sh
./bazel-0.13.1-installer-linux-x86_64.sh --user
export PATH="$PATH:$HOME/bin"
We will compile with CUDA, XLA JIT (oh yes), and jemalloc as malloc support, so answer yes to those. Run the following commands and answer the prompts as shown in the configure run below:
git clone https://github.com/tensorflow/tensorflow
cd tensorflow
git checkout r1.8
./configure
You have bazel 0.13.0 installed.
Please specify the location of python. [Default is /usr/bin/python]:
Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages]
Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: y
jemalloc as malloc support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n
No Google Cloud Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: n
No Hadoop File System support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: n
No Amazon S3 File System support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: n
No Apache Kafka Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with XLA JIT support? [y/N]: y
XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with GDR support? [y/N]: n
No GDR support will be enabled for TensorFlow.
Do you wish to build TensorFlow with VERBS support? [y/N]: n
No VERBS support will be enabled for TensorFlow.
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 9.0]:
Please specify the location where CUDA 9.1 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.1.4
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Do you wish to build TensorFlow with TensorRT support? [y/N]: n
No TensorRT support will be enabled for TensorFlow.
Please specify the NCCL version you want to use. [Leave empty to default to NCCL 1.3]: 2.2.12
Please specify the location where NCCL 2 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:/usr/local/cuda/nccl
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.0]
Do you want to use clang as CUDA compiler? [y/N]: n
nvcc will be used as CUDA compiler.
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/x86_64-linux-gnu-gcc-7]: /usr/bin/gcc-6
Do you wish to build TensorFlow with MPI support? [y/N]: n
No MPI support will be enabled for TensorFlow.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
Configuration finished
Now, to compile TensorFlow, run the command below. It is very RAM-hungry and takes a long time. If you have plenty of RAM, you can remove "--local_resources 2048,.5,1.0" from the line below; otherwise it will work with 2 GB of RAM.
bazel build --config=opt --config=cuda --local_resources 2048,.5,1.0 //tensorflow/tools/pip_package:build_pip_package
To build the wheel file, run the following:
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
Install the generated wheel file with pip:
sudo pip install /tmp/tensorflow_pkg/tensorflow*.whl
TensorFlow can now run and use the device; the image below shows it running in an ipython terminal.
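Since the screenshot is not reproduced here, one quick way (my own suggestion, not from the original answer) to confirm from a terminal that the freshly installed build can see the GPU is:
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
The device list should include a /device:GPU:0 entry in addition to the CPU.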
About python - TensorFlow: CUDA compute capability 3.0, minimum required CUDA capability is 3.5 — a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/39023581/