
tensorflow-serving - How to prepare a model for the TensorFlow Serving REST interface with base64-encoded images?

Reposted · Author: 行者123 · Updated: 2023-12-01 03:05:41

My understanding is that I should be able to take a TensorFlow model from Google's AI Hub, deploy it to TensorFlow Serving, and use it to make predictions by POSTing images via REST requests using curl.

At the moment I cannot find any bbox predictors on AI Hub, but I did find one in the TensorFlow model zoo:

http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz

I have deployed the model to TensorFlow Serving, but the documentation is unclear about exactly what should be included in the JSON of the REST request.

My understanding is that

  • the model's SignatureDefinition determines what the JSON should look like
  • I should base64-encode the image

  • I was able to get the signature definition of the model like this:
    >python tensorflow/tensorflow/python/tools/saved_model_cli.py show --dir /Users/alexryan/alpine/git/tfserving-tutorial3/model-volume/models/bbox/1/ --all

    MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

    signature_def['serving_default']:
      The given SavedModel SignatureDef contains the following input(s):
        inputs['in'] tensor_info:
            dtype: DT_UINT8
            shape: (-1, -1, -1, 3)
            name: image_tensor:0
      The given SavedModel SignatureDef contains the following output(s):
        outputs['out'] tensor_info:
            dtype: DT_FLOAT
            shape: unknown_rank
            name: detection_boxes:0
      Method name is: tensorflow/serving/predict

    I think the shape information here is telling me that the model can handle images of any size?

    The input layer looks like this in TensorBoard:
    [screenshot omitted]

    But how do I translate this SignatureDefinition into a valid JSON request?
    I'm assuming I should use the predict API...

    Google's doc says...

    URL

    POST http://host:port/v1/models/${MODEL_NAME}[/versions/${MODEL_VERSION}]:predict

    /versions/${MODEL_VERSION} is optional. If omitted the latest version is used.

    Request format
    The request body for predict API must be JSON object formatted as follows:

    {
    // (Optional) Serving signature to use.
    // If unspecifed default serving signature is used.
    "signature_name": <string>,
    // Input Tensors in row ("instances") or columnar ("inputs") format.
    // A request can have either of them but NOT both.
    "instances": <value>|<(nested)list>|<list-of-objects>
    "inputs": <value>|<(nested)list>|<object>
    }

    Encoding binary values JSON uses UTF-8 encoding. If you have input feature or tensor values that need to be binary (like image bytes), you must Base64 encode the data and encapsulate it in a JSON object having b64 as the key as follows:

    { "b64": "base64 encoded string" }

    You can specify this object as a value for an input feature or tensor. The same format is used to encode output response as well.

    A classification request with image (binary data) and caption features is shown below:


    {
      "signature_name": "classify_objects",
      "examples": [
        {
          "image": { "b64": "aW1hZ2UgYnl0ZXM=" },
          "caption": "seaside"
        },
        {
          "image": { "b64": "YXdlc29tZSBpbWFnZSBieXRlcw==" },
          "caption": "mountains"
        }
      ]
    }

    Uncertainties include:
  • should I use "instances" in my JSON?
  • should I base64 encode a JPG or PNG or something else?
  • should the image be a particular width and height?

  • Serving Image-Based Deep Learning Models with TensorFlow-Serving’s RESTful API suggests this format:
    {
    "instances": [
    {"b64": "iVBORw"},
    {"b64": "pT4rmN"},
    {"b64": "w0KGg2"}
    ]
    }
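Building that kind of payload programmatically is straightforward. A minimal sketch, where the byte strings are placeholders standing in for real JPEG file contents:

```python
import base64
import json

# Build a batch "instances" request in the format above:
# one {"b64": ...} object per image. The byte strings are
# placeholders for real JPEG data read from files.
images = [b"image bytes", b"awesome image bytes", b"more image bytes"]
instances = [{"b64": base64.b64encode(b).decode("utf-8")} for b in images]
payload = json.dumps({"instances": instances})
print(payload)
```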

    I used this image:
    https://tensorflow.org/images/blogs/serving/cat.jpg

    and base64-encoded it like this:
    import base64
    import requests

    # Download the image
    dl_request = requests.get(IMAGE_URL, stream=True)
    dl_request.raise_for_status()

    # Compose a JSON Predict request (send JPEG image in base64).
    jpeg_bytes = base64.b64encode(dl_request.content).decode('utf-8')
    predict_request = '{"instances" : [{"b64": "%s"}]}' % jpeg_bytes
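As an aside, building the body with `json.dumps` avoids the escaping problems that can creep in with `%` string formatting. A sketch, where `image_bytes` stands in for the downloaded `dl_request.content`:

```python
import base64
import json

# Same request body, built with json.dumps instead of manual
# string formatting. image_bytes is a placeholder for the
# downloaded JPEG content.
image_bytes = b"\xff\xd8\xff\xe0 fake jpeg bytes"
b64_string = base64.b64encode(image_bytes).decode("utf-8")
predict_request = json.dumps({"instances": [{"b64": b64_string}]})
```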

    But when I POST the base64-encoded image with curl like this:
    {"instances" : [{"b64": "/9j/4AAQSkZJRgABAQAASABIAAD/4QBYRXhpZgAATU0AKgAA
    ...
    KACiiigAooooAKKKKACiiigAooooA//Z"}]}

    I get a response like this:
    >./test_local_tfs.sh 
    HEADER=|Content-Type:application/json;charset=UTF-8|
    URL=|http://127.0.0.1:8501/v1/models/saved_model/versions/1:predict|
    * Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to 127.0.0.1 (127.0.0.1) port 8501 (#0)
    > POST /v1/models/saved_model/versions/1:predict HTTP/1.1
    > Host: 127.0.0.1:8501
    > User-Agent: curl/7.54.0
    > Accept: */*
    > Content-Type:application/json;charset=UTF-8
    > Content-Length: 85033
    > Expect: 100-continue
    >
    < HTTP/1.1 100 Continue
    * We are completely uploaded and fine
    < HTTP/1.1 400 Bad Request
    < Content-Type: application/json
    < Date: Tue, 17 Sep 2019 10:47:18 GMT
    < Content-Length: 85175
    <
    { "error": "Failed to process element: 0 of \'instances\' list. Error: Invalid argument: JSON Value: {\n \"b64\": \"/9j/4AAQSkZJRgABAQAAS
    ...
    ooooA//Z\"\n} Type: Object is not of expected type: uint8" }

    I tried converting a local version of the same file to base64 like this (confirming that the dtype is uint8)...
    import base64
    import cv2

    img = cv2.imread('cat.jpg')
    print('dtype: ' + str(img.dtype))
    _, buf = cv2.imencode('.jpg', img)
    jpeg_bytes = base64.b64encode(buf).decode('utf-8')
    predict_request = '{"instances" : [{"b64": "%s"}]}' % jpeg_bytes

    But POSTing this JSON produces the same error.

    However, when the JSON is formatted like this...
    {'instances': [[[[112, 71, 48], [104, 63, 40], [107, 70, 20], [108, 72, 21], [109, 77, 0], [106, 75, 0], [92, 66, 0], [106, 80, 0], [101, 80, 0], [98, 77, 0], [100, 75, 0], [104, 80, 0], [114, 88, 17], [94, 68, 0], [85, 54, 0], [103, 72, 11], [93, 62, 0], [120, 89, 25], [131, 101, 37], [125, 95, 31], [119, 91, 27], [121, 93, 29], [133, 105, 40], [119, 91, 27], [119, 96, 56], [120, 97, 57], [119, 96, 53], [102, 78, 36], [132, 103, 44], [117, 88, 28], [125, 89, 4], [128, 93, 8], [133, 94, 0], [126, 87, 0], [110, 74, 0], [123, 87, 2], [120, 92, 30], [124, 95, 33], [114, 90, 32], 
    ...
    , [43, 24, 33], [30, 17, 36], [24, 11, 30], [29, 20, 38], [37, 28, 46]]]]}

    ... it works.
    The problem is that this JSON file is > 11 MB in size.
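For completeness, a payload in that working nested-list format can be generated from the decoded image with NumPy. A minimal sketch, using a tiny synthetic image in place of the real decoded cat.jpg pixels (which would come from e.g. `np.array(Image.open('cat.jpg'))`):

```python
import json
import numpy as np

# Build the nested-list "instances" payload the server accepted:
# a batch of one (H, W, 3) uint8 array serialized as plain JSON
# lists. A 2x2 synthetic image stands in for the real pixels.
img = np.full((2, 2, 3), 112, dtype=np.uint8)
payload = json.dumps({"instances": [img.tolist()]})
print(payload)
```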

    How can I make the base64-encoded version of the JSON work?

    Update: it seems we have to edit the pretrained model to accept base64 images at the input layer

    This article describes how to edit the model...
    Medium: Serving Image-Based Deep Learning Models with TensorFlow-Serving’s RESTful API
    ...unfortunately, it assumes we have access to the code that generated the model.

    user260826's solution provides a workaround using an estimator, but it assumes the model is a Keras model, which is not the case here.

    Is there a generic method for making a model ready for the TensorFlow Serving REST interface with base64-encoded images that works for any TensorFlow model format?

    Best Answer

    The first step is to export the trained model in the appropriate format, using export_inference_graph.py like this:

    python export_inference_graph \
    --input_type encoded_image_string_tensor \
    --pipeline_config_path path/to/ssd_inception_v2.config \
    --trained_checkpoint_prefix path/to/model.ckpt \
    --output_directory path/to/exported_model_directory

    In the snippet above, it is important to specify

    --input_type encoded_image_string_tensor



    After exporting the model, run the TensorFlow Serving server as usual with the newly exported model.

    The inference code will look like this:
    from __future__ import print_function
    import base64
    import requests

    SERVER_URL = 'http://localhost:8501/v1/models/vedNet:predict'

    IMAGE_URL = 'test_images/19_inp.jpg'


    def main():
        with open(IMAGE_URL, "rb") as image_file:
            jpeg_bytes = base64.b64encode(image_file.read()).decode('utf-8')
            predict_request = '{"instances" : [{"b64": "%s"}]}' % jpeg_bytes
            response = requests.post(SERVER_URL, predict_request)
            response.raise_for_status()
            prediction = response.json()['predictions'][0]


    if __name__ == '__main__':
        main()

    Regarding "tensorflow-serving - How to prepare a model for the TensorFlow Serving REST interface with base64-encoded images?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57964394/
