
python - Set the device on a model trained on a GPU and predict on a CPU

Reposted · Author: 行者123 · Updated: 2023-11-30 22:12:12

I trained a model on a GPU and saved it like this (export_path is my output directory):

builder = tf.saved_model.builder.SavedModelBuilder(export_path)

tensor_info_x = tf.saved_model.utils.build_tensor_info(self.Xph)
tensor_info_y = tf.saved_model.utils.build_tensor_info(self.predprob)
tensor_info_it = tf.saved_model.utils.build_tensor_info(self.istraining)
tensor_info_do = tf.saved_model.utils.build_tensor_info(self.dropout)

prediction_signature = (
    tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'myx': tensor_info_x, 'istraining': tensor_info_it, 'dropout': tensor_info_do},
        outputs={'ypred': tensor_info_y},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))

builder.add_meta_graph_and_variables(
    net, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            prediction_signature},)
builder.save()

Now I'm trying to load it and run predictions. On a machine with a GPU this works fine, but when no GPU is available I get:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation 'rnn/while/rnn/multi_rnn_cell/cell_0/cell_0/layer_norm_basic_lstm_cell/dropout/add/Enter': Operation was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device.

Now, I've read about tf.train.import_meta_graph and its clear_devices option, but I couldn't get that to work. I'm loading my model like this:

from tensorflow.contrib import predictor  # tf.contrib.predictor in TF 1.x

predict_fn = predictor.from_saved_model(modelname)

The error above is thrown at this point. modelname is the full filename of the pb file. Is there a way to walk the graph's nodes and set the device manually (or do something along those lines)? I'm using TensorFlow 1.8.0.
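A minimal sketch of the "walk the nodes and clear their devices" idea. Since a TF 1.8 runtime may not be at hand, FakeNode below is a hypothetical stand-in for the entries of a GraphDef's node list; with real TensorFlow you would iterate the nodes of the imported graph def and blank each node's device string in the same way.

```python
# Hypothetical stand-in for GraphDef nodes; with real TensorFlow 1.x you
# would iterate the graph def's node list and clear node.device identically.
class FakeNode:
    def __init__(self, name, device):
        self.name = name
        self.device = device

def clear_devices(nodes):
    """Strip explicit device placements so the placer can choose freely."""
    for node in nodes:
        node.device = ""  # empty string = no pinned device
    return nodes

nodes = [FakeNode("rnn/while/Enter", "/device:GPU:0"),
         FakeNode("dense/kernel", "/device:GPU:0")]
clear_devices(nodes)
print([n.device for n in nodes])  # ['', '']
```

This is exactly what clear_devices=True does for you when re-saving, which is why the accepted answer below takes that route instead of editing the graph by hand.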

I've seen Can a model trained on gpu used on cpu for inference and vice versa?, but I don't think this is a duplicate. The difference is that I want to know what to do after training.

Best Answer

I ended up re-saving the model on the GPU machine with clear_devices=True and then moving the re-saved model to the CPU-only machine. I couldn't find any ready-made solution, so I'm posting my script below:

import tensorflow as tf

# m is the path to the original (GPU-saved) SavedModel directory
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], m)
    loaded_graph = tf.get_default_graph()
    x = loaded_graph.get_tensor_by_name('myx:0')
    dropout = loaded_graph.get_tensor_by_name('mydropout:0')
    y = loaded_graph.get_tensor_by_name('myy:0')
    export_path = 'somedirectory'
    builder = tf.saved_model.builder.SavedModelBuilder(export_path + '/mymodel')
    tensor_info_x = tf.saved_model.utils.build_tensor_info(x)
    tensor_info_y = tf.saved_model.utils.build_tensor_info(y)
    tensor_info_do = tf.saved_model.utils.build_tensor_info(dropout)
    prediction_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs={'myx': tensor_info_x, 'mydropout': tensor_info_do},
            outputs={'myy': tensor_info_y},
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))

    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                prediction_signature},
        clear_devices=True)  # strip the explicit /device:GPU:0 placements
    builder.save()
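As a quick sanity check, you can hide the GPUs from TensorFlow on the GPU machine itself and confirm the re-saved model now loads without the device error. Setting CUDA_VISIBLE_DEVICES to -1 before importing TensorFlow is a standard way to force CPU execution; the predictor call is shown as a comment since it assumes a TF 1.8 install and the export path from the script above.

```python
import os

# Hiding all GPUs forces TensorFlow onto the CPU; this must be set
# BEFORE tensorflow is imported anywhere in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# With TF 1.8 installed, the re-saved model would then be loaded like this
# (commented out because it depends on that environment and the path above):
# from tensorflow.contrib import predictor
# predict_fn = predictor.from_saved_model('somedirectory/mymodel')
# result = predict_fn({'myx': batch_x, 'mydropout': 1.0})

print(os.environ["CUDA_VISIBLE_DEVICES"])  # -1
```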

A similar question about "python - Set the device on a model trained on a GPU and predict on a CPU" can be found on Stack Overflow: https://stackoverflow.com/questions/51182162/
