python-3.x - Using multiple GPUs in TensorFlow to run inference on a pb model

I am using a server with 8 Titan X GPUs, trying to predict images faster than with a single GPU.
I load the PB model like this:

import os
import json
import time

import numpy as np
import tensorflow as tf

model_dir = "./model"
model = "nasnet_large_v1.pb"
model_path = os.path.join(model_dir, model)
model_graph = tf.Graph()
with model_graph.as_default():
    with tf.gfile.GFile(model_path, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')
        input_layer = model_graph.get_tensor_by_name("input:0")
        output_layer = model_graph.get_tensor_by_name('final_layer/predictions:0')

Then I start iterating over the file directory ./data_input as follows:
with tf.Session(graph=model_graph, config=config) as inference_session:
    # Initialize session
    initializer = np.zeros([1, 331, 331, 3])
    print("Initializing session...")
    inference_session.run(output_layer, feed_dict={input_layer: initializer})
    print("Done initializing.")

    # Prediction
    file_list = []
    processed_files = []

    for path, dir, files in os.walk('./model_output/processed_files'):
        for file in files:
            processed_files.append(file.split('_')[0]+'.tfrecord')

    print("Processed files: ")
    for f in processed_files:
        print('\t', f)

    while True:
        for path, dir, files in os.walk("./data_input"):
            for file in files:
                if file == '.DS_Store': continue
                if file in processed_files: continue
                print("Reading file {}".format(file))
                file_path = os.path.join('./data_input', file)
                file_list.append(file_path)
                res = predict(file_path)
                processed_files.append(file)

                with open('./model_output/processed_files/{}_{}_processed_files.json'.format(file.split('.')[0], model.split('.')[0]), 'w') as f:
                    f.write(json.dumps(processed_files))

                with open('./model_output/classify_result/{}_{}_classify_result.json'.format(file.split('.')[0], model.split('.')[0]), 'w') as f:
                    f.write(json.dumps(res, indent=4, separators=(',',':')))

        time.sleep(1)

In the predict() function, I wrote code like this:
label_map = get_label()
# read tfrecord file by tf.data
dataset = get_dataset(filename)
# dataset.apply(tf.contrib.data.prefetch_to_device("/gpu:0"))
# load data
iterator = dataset.make_one_shot_iterator()
features = iterator.get_next()

result = []
content = {}
count = 0
# session
with tf.Session() as sess:
    tf.global_variables_initializer()
    t1 = time.time()
    try:
        while True:
            [_image, _label, _filepath] = sess.run(fetches=features)
            _image = np.asarray([_image])
            _image = _image.reshape(-1, 331, 331, 3)

            predictions = inference_session.run(output_layer, feed_dict={input_layer: _image})
            predictions = np.squeeze(predictions)

            # res = []
            for i, pred in enumerate(predictions):
                count += 1
                overall_result = np.argmax(pred)
                predict_result = label_map[overall_result].split(":")[-1]

                if predict_result == 'unknown': continue

                content['prob'] = str(np.max(pred))
                content['label'] = predict_result
                content['filepath'] = str(_filepath[i], encoding='utf-8')
                result.append(content)

    except tf.errors.OutOfRangeError:
        t2 = time.time()
        print("{} images processed, average time: {}s".format(count, (t2-t1)/count))
        return result

I tried using with tf.device('/gpu:{}'.format(i)) in the model-loading part, the inference-session part, and the session part, but nvidia-smi shows that only GPU0 is used at 100%, while the other GPUs do not seem to do any work even though memory is allocated on them.
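For reference, a minimal reconstruction of that attempt (an assumption based on the description above, not the exact code from the repository), pinning the graph import to one GPU with tf.device():

# Hypothetical reconstruction of the attempt: wrapping the import of the
# frozen graph in tf.device() to place it on a chosen GPU.
import tensorflow as tf

i = 1  # GPU index that was tried
model_graph = tf.Graph()
with model_graph.as_default():
    with tf.gfile.GFile("./model/nasnet_large_v1.pb", 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.device('/gpu:{}'.format(i)):
        _ = tf.import_graph_def(graph_def, name='')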

What should I do to get all the GPUs running at the same time and speed up prediction?

My code is at https://github.com/tzattack/image_classification_algorithms.

Best Answer

You can force the device for every node in the graph like this:

def load_network(graph, i):
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(graph, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        for node in od_graph_def.node:
            node.device = '/gpu:{}'.format(i) if i >= 0 else '/cpu:0'
    return {"od_graph_def": od_graph_def}

Then you can merge the graphs you obtain (one per GPU) into a single graph. If you use the same model for all GPUs, you can also change the tensor names, and then run everything in one session, as sketched below.
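For illustration, here is a minimal sketch of that approach (my own assumption of how the pieces fit together, not code from the answer): it reuses the load_network helper above and the input:0 / final_layer/predictions:0 tensor names from the question; the gpu_{i} name prefixes and the batch splitting are illustrative.

# Sketch: import one device-pinned copy of the frozen graph per GPU into a
# single graph, then feed every copy in one session.run() call.
import numpy as np
import tensorflow as tf

NUM_GPUS = 8
MODEL_PATH = "./model/nasnet_large_v1.pb"

merged_graph = tf.Graph()
with merged_graph.as_default():
    for i in range(NUM_GPUS):
        # load_network() (defined above) pins every node to /gpu:i
        graph_def = load_network(MODEL_PATH, i)["od_graph_def"]
        # A unique name prefix keeps the per-GPU copies apart
        tf.import_graph_def(graph_def, name="gpu_{}".format(i))

inputs = [merged_graph.get_tensor_by_name("gpu_{}/input:0".format(i))
          for i in range(NUM_GPUS)]
outputs = [merged_graph.get_tensor_by_name("gpu_{}/final_layer/predictions:0".format(i))
           for i in range(NUM_GPUS)]

config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(graph=merged_graph, config=config) as sess:
    # Split one batch across the GPUs; the copies are independent subgraphs
    # on different devices, so a single run() call can evaluate them in parallel.
    batch = np.zeros([NUM_GPUS, 331, 331, 3], dtype=np.float32)
    feed = {inp: chunk for inp, chunk in zip(inputs, np.split(batch, NUM_GPUS))}
    preds = sess.run(outputs, feed_dict=feed)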

This works very well for me.

Regarding python-3.x - using multiple GPUs in TensorFlow to run inference on a pb model, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54247097/
