
python - Input an image in tensorflow-lite C++


I am trying to move a Python+Keras model to TensorFlow Lite with C++ for an embedded platform.
I don't know how to correctly pass the image data to the interpreter.
I have the following working Python code:

import numpy as np
import cv2
import tensorflow as tf
import matplotlib.pyplot as plt

interpreter = tf.lite.Interpreter(model_path="model.tflite")
print(interpreter.get_input_details())
print(interpreter.get_output_details())
print(interpreter.get_tensor_details())
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()  # needed below for get_tensor
input_shape = input_details[0]['shape']
print("Input Shape ")
print(input_shape)

image_a = plt.imread('image/0_0_0_copy.jpeg')
image_a = cv2.resize(image_a, (224, 224))
image_a = np.asarray(image_a) / 255
image_a = np.reshape(image_a, (1, 224, 224, 3))

input_data = np.array(image_a, dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()

output_data = interpreter.get_tensor(output_details[0]['index'])
print("Output Data ")
print(output_data)
The input shape of the image is (1, 224, 224, 3).
I need the equivalent C++ code. How do I translate this?
So far I have the following C++ code:
#include <cstdio>
#include <cstdlib>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"

int main() {

    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile("model.tflite");

    if (!model) {
        printf("Failed to map model\n");
        exit(0);
    }

    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder builder(*model, resolver);
    std::unique_ptr<tflite::Interpreter> interpreter;
    builder(&interpreter);  // actually constructs the interpreter

    if (!interpreter) {
        printf("Failed to construct interpreter\n");
        exit(0);
    }

    interpreter->SetNumThreads(4);
    interpreter->SetAllowFp16PrecisionForFp32(true);

    if (interpreter->AllocateTensors() != kTfLiteOk) {
        printf("Failed to allocate tensors\n");
    }

    tflite::PrintInterpreterState(interpreter.get());

    printf("tensors size: %zu\n", interpreter->tensors_size());
    printf("nodes size: %zu\n", interpreter->nodes_size());
    printf("inputs: %zu\n", interpreter->inputs().size());
    printf("input(0) name: %s\n", interpreter->GetInputName(0));

    float* input = interpreter->typed_input_tensor<float>(0);
    // Need help here

    interpreter->Invoke();

    float* output = interpreter->typed_output_tensor<float>(0);

    printf("output1 = %f\n", output[0]);
    printf("output2 = %f\n", output[1]);

    return 0;
}

Best Answer

I solved the problem this way.
Build the interpreter as usual:

// Load model
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile(filename);
TFLITE_MINIMAL_CHECK(model != nullptr);

// Build the interpreter
tflite::ops::builtin::BuiltinOpResolver resolver;
// Only needed when the model contains Larq Compute Engine custom ops
compute_engine::tflite::RegisterLCECustomOps(&resolver);

tflite::InterpreterBuilder builder(*model, resolver);
std::unique_ptr<tflite::Interpreter> interpreter;
builder(&interpreter);
TFLITE_MINIMAL_CHECK(interpreter != nullptr);

TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
To get the input shape:
const std::vector<int>& t_inputs = interpreter->inputs();
TfLiteTensor* tensor = interpreter->tensor(t_inputs[0]);

// Input rank - for a CNN it is four: (batch_size, h, w, channels)
int input_size = tensor->dims->size;

int batch_size = tensor->dims->data[0];
int h = tensor->dims->data[1];
int w = tensor->dims->data[2];
int channels = tensor->dims->data[3];
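With those dimensions known, filling the input tensor mirrors the Python preprocessing: load, resize, scale to [0, 1], then copy the floats into the buffer returned by typed_input_tensor. Here is a minimal sketch using OpenCV, assuming a float32 NHWC input with batch_size == 1 (the image path is just an example; note cv::imread returns BGR, while plt.imread in the Python code yields RGB):

// Requires <opencv2/opencv.hpp> and <cstring>
cv::Mat image = cv::imread("image/0_0_0_copy.jpeg");   // 8-bit BGR
cv::cvtColor(image, image, cv::COLOR_BGR2RGB);         // match plt.imread's RGB order
cv::resize(image, image, cv::Size(w, h));
image.convertTo(image, CV_32FC3, 1.0 / 255.0);         // float32, scaled to [0, 1]

// cv::Mat stores pixels as HWC, which matches an NHWC tensor when batch_size == 1.
TFLITE_MINIMAL_CHECK(image.isContinuous());
float* input = interpreter->typed_input_tensor<float>(0);
std::memcpy(input, image.ptr<float>(0),
            static_cast<size_t>(h) * w * channels * sizeof(float));

interpreter->Invoke();

float* output = interpreter->typed_output_tensor<float>(0);

If the model is quantized with a uint8 input instead, use typed_input_tensor<uint8_t>(0) and skip the division by 255.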
This worked for me. I hope it helps you too.
Reference: https://www.tensorflow.org/lite/microcontrollers/get_started

About python - Input an image in tensorflow-lite C++, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/64779433/
