docker - TensorFlow Serving docker container adds the GPU device, but GPU utilization is 0%

Hi, I'm running into a problem where dockerized TF Serving sees my GPU but doesn't use it.

It adds the GPU as device 0 and allocates memory on it, but then loads the ML model into CPU memory and runs inference on the CPU only. GPU-Util in nvidia-smi never leaves 0%.

Can anyone help me figure out why this is happening and what I should change?

Setup:

OS: Amazon Deep Learning AMI (Ubuntu 18.04) on an EC2 g4dn.xlarge

GPU: Tesla T4

Model: the pretrained gpt2-xl TensorFlow model from huggingface, which I froze into a SavedModel and uploaded to S3.

Docker: the Docker that ships with the Deep Learning AMI. I've already checked and confirmed that nvidia-smi runs containerized, so it's not an nvidia+docker issue.

TF Serving: I use the Dockerfile below to pull the latest-gpu image and download the model directly into it at build time:

FROM tensorflow/serving:latest-gpu

# Set the timezone non-interactively so apt installs don't block on tzdata.
ENV TZ=America
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# Install the AWS CLI used to fetch the SavedModel from S3.
RUN apt-get update && apt-get install -y awscli

ENV AWS_ACCESS_KEY_ID=...
ENV AWS_SECRET_ACCESS_KEY=...

ARG model_name
ENV MODEL_NAME=$model_name

# Use AWS CLI to download the SavedModel into the docker container from S3 bucket
RUN aws s3 cp s3://v3-models/models/pretrained_tf_serving/${MODEL_NAME} /models/${MODEL_NAME} --recursive

EXPOSE 8500

I build and run the above Dockerfile with the following commands:

#!/bin/bash

# first build the image with the model_name arg, and tag it as xl-serving
docker build -t xl-serving --build-arg model_name=gpt2-xl ../../model_server

# then run it with gpus, exposing gRPC port
docker run -it --rm --gpus all --runtime nvidia -p 8500:8500 xl-serving
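
For reference, this is the kind of check I mean by "nvidia-smi runs containerized": the NVIDIA runtime injects the nvidia-smi binary when --gpus all is passed, so it can be run against this same image by overriding the entrypoint:

# Sanity check: run nvidia-smi inside the serving image itself.
docker run --rm --gpus all --entrypoint nvidia-smi xl-serving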

Running the serving container prints the output below. Note that the GPU is added.

2020-11-06 05:25:34.671071: I tensorflow_serving/model_servers/server.cc:87] Building single TensorFlow model file config:  model_name: gpt2-xl model_base_path: /models/gpt2-xl
2020-11-06 05:25:34.671274: I tensorflow_serving/model_servers/server_core.cc:464] Adding/updating models.
2020-11-06 05:25:34.671295: I tensorflow_serving/model_servers/server_core.cc:575] (Re-)adding model: gpt2-xl
2020-11-06 05:25:34.771644: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: gpt2-xl version: 1}
2020-11-06 05:25:34.771673: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: gpt2-xl version: 1}
2020-11-06 05:25:34.771687: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: gpt2-xl version: 1}
2020-11-06 05:25:34.771724: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/gpt2-xl/1
2020-11-06 05:25:35.222512: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2020-11-06 05:25:35.222545: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:234] Reading SavedModel debug info (if present) from: /models/gpt2-xl/1
2020-11-06 05:25:35.222672: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-11-06 05:25:35.223994: I external/org_tensorflow/tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-11-06 05:25:35.262238: I external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-06 05:25:35.263132: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:00:1e.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2020-11-06 05:25:35.263149: I external/org_tensorflow/tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2020-11-06 05:25:35.263236: I external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-06 05:25:35.264122: I external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-06 05:25:35.264948: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-11-06 05:25:36.185140: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-11-06 05:25:36.185165: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2020-11-06 05:25:36.185171: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2020-11-06 05:25:36.185334: I external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-06 05:25:36.186222: I external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-06 05:25:36.187046: I external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-11-06 05:25:36.187852: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 13896 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:1e.0, compute capability: 7.5)
2020-11-06 05:25:37.279837: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:199] Restoring SavedModel bundle.
2020-11-06 05:25:56.154008: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:183] Running initialization op on SavedModel bundle at path: /models/gpt2-xl/1
2020-11-06 05:25:57.551535: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:303] SavedModel load for tags { serve }; Status: success: OK. Took 22777844 microseconds.
2020-11-06 05:25:57.832736: I tensorflow_serving/servables/tensorflow/saved_model_warmup_util.cc:59] No warmup data file found at /models/gpt2-xl/1/assets.extra/tf_serving_warmup_requests
2020-11-06 05:25:57.835030: I tensorflow_serving/core/loader_harness.cc:87] Successfully loaded servable version {name: gpt2-xl version: 1}
2020-11-06 05:25:57.838329: I tensorflow_serving/model_servers/server.cc:367] Running gRPC ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
2020-11-06 05:25:57.840415: I tensorflow_serving/model_servers/server.cc:387] Exporting HTTP/REST API at:localhost:8501 ...
[evhttp_server.cc : 238] NET_LOG: Entering the event loop ...

I then hit this server with single, non-batched gRPC calls. They run successfully and return correct GPT2 output, but each call takes as long as the same setup does on CPU. htop shows 8 GB of RAM (the size of the gpt2-xl model) loaded into host memory, and the TF Serving process maxing out one or two CPU cores. It appears to run only on the CPU.
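
For reference, the calls are made with a minimal gRPC client along these lines. This is a sketch, not my exact client: the input key 'input_ids' is a placeholder, and the signature name 'forward' matches the script in Update #1 below.

import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

# Connect to the gRPC port exposed by the container.
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Build a PredictRequest against the gpt2-xl servable.
request = predict_pb2.PredictRequest()
request.model_spec.name = 'gpt2-xl'
request.model_spec.signature_name = 'forward'
# 'input_ids' is a hypothetical tensor key; use whatever the signature expects.
request.inputs['input_ids'].CopyFrom(
    tf.make_tensor_proto([[198, 15667, 6530, 25, 29437]], dtype=tf.int32))

result = stub.Predict(request, 60.0)  # 60-second timeout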

Here's what nvidia-smi looks like while a call is running. Note the allocated memory and the 0% GPU-Util:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   36C    P0    26W /  70W |  14240MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A     13357      C   tensorflow_model_server        14221MiB  |
+-----------------------------------------------------------------------------+

I've searched online and can't find any advice on this. The closest thing I found is this GitHub issue: GPU utilization with TF serving #1440, but the fix there didn't work for me. They were dealing with low GPU utilization; I'm dealing with 0%.

Any suggestions as to what the problem is?

Thank you so much. I've been banging my head against a wall on this for days, so I'd greatly appreciate your help :)

Update #1:

I wrote a Python script (below) that uses tensorflow==2.3.0 to load the model and run it, in a conda environment with CUDA 11.0. It successfully runs inference on the GPU, about 15x faster than what I get from TF Serving.

import tensorflow as tf
import numpy as np

model = tf.saved_model.load('/home/ubuntu/models/gpt2-xl/1/')
servable = model.signatures["forward"]

# Create input tensor
tensor_in = tf.constant([[198, 15667, 6530, 25, 29437, 1706, 1610, 977, 948, 33611]])

# Run a loop of 10 inferences on the model, to predict the next 10 tokens.
for i in range(10):
    pred = servable(tensor_in)
    logits = pred['output_0']
    # Keep only the logits for the last position and apply temperature 0.8.
    logits = logits[:, -1, :] / 0.8
    # Sample the next token id from the softmax distribution.
    next_id = tf.random.categorical(tf.nn.log_softmax(logits, axis=-1), num_samples=1)
    next_id = tf.dtypes.cast(next_id, tf.int32).numpy()
    # Append the sampled token and feed the longer sequence back in.
    tensor_in = np.concatenate((tensor_in, next_id), axis=1)
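
One thing worth adding at the top of that script is TensorFlow's device placement logging (a standard tf.debugging call), which prints the device each op runs on and confirms the GPU is actually being used:

import tensorflow as tf

# Log the device every op executes on; GPU kernels print as /device:GPU:0.
tf.debugging.set_log_device_placement(True)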

Next: I'll try running tf-serving outside a container. Update to follow...
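
The plan there is to invoke the server binary directly on the host, roughly like this (the path assumes the SavedModel sits under /home/ubuntu/models, as in the script above):

# Run TF Serving natively against the versioned SavedModel directory.
tensorflow_model_server \
    --port=8500 \
    --rest_api_port=8501 \
    --model_name=gpt2-xl \
    --model_base_path=/home/ubuntu/models/gpt2-xl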

Best Answer

How did you save your model? Add clear_devices=True when saving the model and retry.
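
For context, clear_devices is a flag on the TF1-style export API; it strips hard-coded device assignments (e.g. /device:CPU:0) from the graph, so the server is free to place ops on the GPU. A rough sketch of re-exporting an existing SavedModel this way, assuming it round-trips through the TF1 loader (the output path is hypothetical):

import tensorflow.compat.v1 as tf1

tf1.disable_eager_execution()
export_dir = '/home/ubuntu/models/gpt2-xl-clean/1'  # hypothetical output path

with tf1.Session(graph=tf1.Graph()) as sess:
    # Load the existing SavedModel into this session.
    meta_graph = tf1.saved_model.loader.load(
        sess, ['serve'], '/home/ubuntu/models/gpt2-xl/1')

    # Re-export with clear_devices=True so pinned device placements are dropped.
    builder = tf1.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(
        sess, ['serve'],
        signature_def_map=dict(meta_graph.signature_def),
        clear_devices=True)
    builder.save()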

This question about the TensorFlow Serving docker container adding the GPU device but showing 0% GPU utilization comes from a similar question on Stack Overflow: https://stackoverflow.com/questions/64709679/
