Azure OpenAI Swagger validation with APIM fails


I am converting the Swagger for the Azure OpenAI API, version 2023-07-01-preview, from JSON to YAML.

My Swagger looks like this:

openapi: 3.0.1
info:
  title: OpenAI Models API
  description: ''
  version: '123'
servers:
  - url: https://def.com/openai
paths:
  /gpt-35-turbo/chat/completions:
    post:
      tags:
        - openai
      summary: Creates a completion for the chat message
      description: gpt-35-turbo-chat-completion
      operationId: GPT_35_Turbo_ChatCompletions_Create
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/createChatCompletionRequest'
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/createChatCompletionResponse'
          headers:
            apim-request-id:
              description: Request ID for troubleshooting purposes
              schema:
                type: string
        default:
          description: Service unavailable
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/errorResponse'
          headers:
            apim-request-id:
              description: Request ID for troubleshooting purposes
              schema:
                type: string
components:
  schemas:
    errorResponse:
      type: object
      properties:
        error:
          $ref: '#/components/schemas/error'
    errorBase:
      type: object
      properties:
        code:
          type: string
        message:
          type: string
    error:
      type: object
      allOf:
        - $ref: '#/components/schemas/errorBase'
      properties:
        code:
          type: string
        message:
          type: string
        param:
          type: string
        type:
          type: string
        inner_error:
          $ref: '#/components/schemas/innerError'
    innerError:
      description: Inner error with additional details.
      type: object
      properties:
        code:
          $ref: '#/components/schemas/innerErrorCode'
        content_filter_results:
          $ref: '#/components/schemas/contentFilterResults'
    innerErrorCode:
      description: Error codes for the inner error object.
      enum:
        - ResponsibleAIPolicyViolation
      type: string
      x-ms-enum:
        name: InnerErrorCode
        modelAsString: true
        values:
          - value: ResponsibleAIPolicyViolation
            description: The prompt violated one of more content filter rules.
    contentFilterResult:
      type: object
      properties:
        severity:
          type: string
          enum:
            - safe
            - low
            - medium
            - high
          x-ms-enum:
            name: ContentFilterSeverity
            modelAsString: true
            values:
              - value: safe
                description: >-
                  General content or related content in generic or non-harmful
                  contexts.
              - value: low
                description: Harmful content at a low intensity and risk level.
              - value: medium
                description: Harmful content at a medium intensity and risk level.
              - value: high
                description: Harmful content at a high intensity and risk level.
        filtered:
          type: boolean
      required:
        - severity
        - filtered
    contentFilterResults:
      type: object
      description: >-
        Information about the content filtering category (hate, sexual,
        violence, self_harm), if it has been detected, as well as the severity
        level (very_low, low, medium, high-scale that determines the intensity
        and risk level of harmful content) and if it has been filtered or not.
      properties:
        sexual:
          $ref: '#/components/schemas/contentFilterResult'
        violence:
          $ref: '#/components/schemas/contentFilterResult'
        hate:
          $ref: '#/components/schemas/contentFilterResult'
        self_harm:
          $ref: '#/components/schemas/contentFilterResult'
        error:
          $ref: '#/components/schemas/errorBase'
    promptFilterResult:
      type: object
      description: Content filtering results for a single prompt in the request.
      properties:
        prompt_index:
          type: integer
        content_filter_results:
          $ref: '#/components/schemas/contentFilterResults'
    promptFilterResults:
      type: array
      description: >-
        Content filtering results for zero or more prompts in the request. In a
        streaming request, results for different prompts may arrive at different
        times or in different orders.
      items:
        $ref: '#/components/schemas/promptFilterResult'
    createChatCompletionRequest:
      type: object
      allOf:
        - $ref: '#/components/schemas/chatCompletionsRequestCommon'
        - properties:
            messages:
              description: >-
                A list of messages comprising the conversation so far. [Example
                Python
                code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
              type: array
              minItems: 1
              items:
                $ref: '#/components/schemas/chatCompletionRequestMessage'
            functions:
              description: A list of functions the model may generate JSON inputs for.
              type: array
              minItems: 1
              items:
                $ref: '#/components/schemas/chatCompletionFunctions'
            function_call:
              description: >-
                Controls how the model responds to function calls. "none" means
                the model does not call a function, and responds to the
                end-user. "auto" means the model can pick between an end-user or
                calling a function. Specifying a particular function via
                `{"name":\ "my_function"}` forces the model to call that
                function. "none" is the default when no functions are present.
                "auto" is the default if functions are present.
              oneOf:
                - type: string
                  enum:
                    - none
                    - auto
                - type: object
                  properties:
                    name:
                      type: string
                      description: The name of the function to call.
                  required:
                    - name
            'n':
              type: integer
              minimum: 1
              maximum: 128
              default: 1
              example: 1
              nullable: true
              description: >-
                How many chat completion choices to generate for each input
                message.
          required:
            - messages
    chatCompletionsRequestCommon:
      type: object
      properties:
        temperature:
          description: >-
            What sampling temperature to use, between 0 and 2. Higher values
            like 0.8 will make the output more random, while lower values like
            0.2 will make it more focused and deterministic.

            We generally recommend altering this or `top_p` but not both.
          type: number
          minimum: 0
          maximum: 2
          default: 1
          example: 1
          nullable: true
        top_p:
          description: >-
            An alternative to sampling with temperature, called nucleus
            sampling, where the model considers the results of the tokens with
            top_p probability mass. So 0.1 means only the tokens comprising the
            top 10% probability mass are considered.

            We generally recommend altering this or `temperature` but not both.
          type: number
          minimum: 0
          maximum: 1
          default: 1
          example: 1
          nullable: true
        stop:
          description: Up to 4 sequences where the API will stop generating further tokens.
          oneOf:
            - type: string
              nullable: true
            - type: array
              items:
                type: string
                nullable: false
              minItems: 1
              maxItems: 4
              description: Array minimum size of 1 and maximum of 4
          default: null
        max_tokens:
          description: >-
            The maximum number of tokens allowed for the generated answer. By
            default, the number of tokens the model can return will be (4096 -
            prompt tokens).
          type: integer
          default: 4096
        presence_penalty:
          description: >-
            Number between -2.0 and 2.0. Positive values penalize new tokens
            based on whether they appear in the text so far, increasing the
            model's likelihood to talk about new topics.
          type: number
          default: 0
          minimum: -2
          maximum: 2
        frequency_penalty:
          description: >-
            Number between -2.0 and 2.0. Positive values penalize new tokens
            based on their existing frequency in the text so far, decreasing the
            model's likelihood to repeat the same line verbatim.
          type: number
          default: 0
          minimum: -2
          maximum: 2
        logit_bias:
          description: >-
            Modify the likelihood of specified tokens appearing in the
            completion. Accepts a json object that maps tokens (specified by
            their token ID in the tokenizer) to an associated bias value from
            -100 to 100. Mathematically, the bias is added to the logits
            generated by the model prior to sampling. The exact effect will vary
            per model, but values between -1 and 1 should decrease or increase
            likelihood of selection; values like -100 or 100 should result in a
            ban or exclusive selection of the relevant token.
          type: object
          nullable: true
        user:
          description: >-
            A unique identifier representing your end-user, which can help Azure
            OpenAI to monitor and detect abuse.
          type: string
          example: user-1234
          nullable: false
    chatCompletionRequestMessage:
      type: object
      properties:
        role:
          type: string
          enum:
            - system
            - user
            - assistant
            - function
          description: >-
            The role of the messages author. One of `system`, `user`,
            `assistant`, or `function`.
        content:
          type: string
          description: >-
            The contents of the message. `content` is required for all messages
            except assistant messages with function calls.
        name:
          type: string
          description: >-
            The name of the author of this message. `name` is required if role
            is `function`, and it should be the name of the function whose
            response is in the `content`. May contain a-z, A-Z, 0-9, and
            underscores, with a maximum length of 64 characters.
        function_call:
          type: object
          description: >-
            The name and arguments of a function that should be called, as
            generated by the model.
          properties:
            name:
              type: string
              description: The name of the function to call.
            arguments:
              type: string
              description: >-
                The arguments to call the function with, as generated by the
                model in JSON format. Note that the model does not always
                generate valid JSON, and may hallucinate parameters not defined
                by your function schema. Validate the arguments in your code
                before calling your function.
      required:
        - role
    createChatCompletionResponse:
      type: object
      allOf:
        - $ref: '#/components/schemas/chatCompletionsResponseCommon'
        - properties:
            prompt_filter_results:
              $ref: '#/components/schemas/promptFilterResults'
            choices:
              type: array
              items:
                type: object
                allOf:
                  - $ref: '#/components/schemas/chatCompletionChoiceCommon'
                  - properties:
                      message:
                        $ref: '#/components/schemas/chatCompletionResponseMessage'
                      content_filter_results:
                        $ref: '#/components/schemas/contentFilterResults'
          required:
            - id
            - object
            - created
            - model
            - choices
    chatCompletionFunctions:
      type: object
      properties:
        name:
          type: string
          description: >-
            The name of the function to be called. Must be a-z, A-Z, 0-9, or
            contain underscores and dashes, with a maximum length of 64.
        description:
          type: string
          description: The description of what the function does.
        parameters:
          $ref: '#/components/schemas/chatCompletionFunctionParameters'
      required:
        - name
    chatCompletionFunctionParameters:
      type: object
      description: >-
        The parameters the functions accepts, described as a JSON Schema object.
        See the [guide](/docs/guides/gpt/function-calling) for examples, and the
        [JSON Schema
        reference](https://json-schema.org/understanding-json-schema/) for
        documentation about the format.
      additionalProperties: true
    chatCompletionsResponseCommon:
      type: object
      properties:
        id:
          type: string
        object:
          type: string
        created:
          type: integer
          format: unixtime
        model:
          type: string
        usage:
          type: object
          properties:
            prompt_tokens:
              type: integer
            completion_tokens:
              type: integer
            total_tokens:
              type: integer
          required:
            - prompt_tokens
            - completion_tokens
            - total_tokens
      required:
        - id
        - object
        - created
        - model
    chatCompletionChoiceCommon:
      type: object
      properties:
        index:
          type: integer
        finish_reason:
          type: string
    chatCompletionResponseMessage:
      type: object
      properties:
        role:
          type: string
          enum:
            - system
            - user
            - assistant
            - function
          description: The role of the author of this message.
        content:
          type: string
          description: The contents of the message.
        function_call:
          type: object
          description: >-
            The name and arguments of a function that should be called, as
            generated by the model.
          properties:
            name:
              type: string
              description: The name of the function to call.
            arguments:
              type: string
              description: >-
                The arguments to call the function with, as generated by the
                model in JSON format. Note that the model does not always
                generate valid JSON, and may hallucinate parameters not defined
                by your function schema. Validate the arguments in your code
                before calling your function.
      required:
        - role
  securitySchemes:
    apiKeyHeader:
      type: apiKey
      name: Ocp-Apim-Subscription-Key
      in: header
    apiKeyQuery:
      type: apiKey
      name: subscription-key
      in: query
security:
  - apiKeyHeader: [ ]
  - apiKeyQuery: [ ]

I am using it in Azure APIM and validating the request content like this:

<validate-content unspecified-content-type-action="ignore" max-size="102400" size-exceeded-action="detect" errors-variable-name="requestBodyValidation">
    <content type="application/json" validate-as="json" action="prevent" allow-additional-properties="false" />
</validate-content>

Now when I make a request with the actual properties, like this:

{
  "messages": [
    {
      "role": "user",
      "content": "Find beachfront hotels in San Diego for less than $300 a month with free breakfast."
    }
  ],
  "temperature": 1,
  "top_p": 1,
  "stop": "",
  "max_tokens": 2000,
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "logit_bias": {},
  "user": "user-1234",
  "n": 1,
  "function_call": "auto",
  "functions": [
    {
      "name": "search_hotels",
      "description": "Retrieves hotels from the search index based on the parameters provided",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The location of the hotel (i.e. Seattle, WA)"
          },
          "max_price": {
            "type": "number",
            "description": "The maximum price for the hotel"
          },
          "features": {
            "type": "string",
            "description": "A comma separated list of features (i.e. beachfront, free wifi, etc.)"
          }
        },
        "required": ["location"]
      }
    }
  ]
}

APIM returns an error like this:

{
  "statusCode": 400,
  "message": "Body of the request does not conform to the definition which is associated with the content type application/json. JSON does not match all schemas from 'allOf'. Invalid schema indexes: 0, 1. Line: 42, Position: 1"
}

But the same request works when I call Azure OpenAI directly.

What could be going wrong here?

Best answer

I believe your problem is this attribute: allow-additional-properties="false"

allow-additional-properties Boolean. For a JSON schema, specifies whether to implement a runtime override of the additionalProperties value configured in the schema:

  • true: Allow additional properties in the request or response body, even if the JSON schema's additionalProperties field is configured to disallow additional properties.
  • false: Do not allow additional properties in the request or response body, even if the JSON schema's additionalProperties field is configured to allow additional properties.

If the attribute isn't specified, the policy validates additional properties according to configuration of the additionalProperties field in the schema.

来源:https://learn.microsoft.com/en-us/azure/api-management/validate-content-policy#content-attributes

This attribute overrides your JSON schema. Even though your allOf definition does not use additionalProperties: false, APIM injects this constraint into the root schema, which translates to:

{
  "type": "object",
  "additionalProperties": false,
  "allOf": [{...}, {...}]
}

This schema will not allow any properties to validate, because no properties are defined at the root.

In this case, the only valid instance is

{}

OR

true
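To see the effect concretely, here is a minimal sketch (assuming the Python jsonschema package and two stand-in allOf branches; neither is taken from the original spec) of how a draft-07 validator treats the overridden root. additionalProperties is checked only against the root-level properties keyword, which is empty, so every key in the body is reported as additional no matter what the allOf subschemas declare.

from jsonschema import Draft7Validator

# Stand-in for what APIM effectively validates against after the override:
# an empty root plus the allOf branches that actually declare the keys.
overridden_root = {
    "type": "object",
    "additionalProperties": False,
    "allOf": [
        {"type": "object", "properties": {"temperature": {}, "top_p": {}}},
        {"type": "object", "properties": {"messages": {}, "functions": {}}},
    ],
}

validator = Draft7Validator(overridden_root)
print(list(validator.iter_errors({})))  # [] -> the empty object passes
for error in validator.iter_errors({"messages": []}):
    print(error.message)  # 'messages' is reported as an additional property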

There are a few ways to solve this, but IMHO the best option is to express the constraint in the schema definition rather than through the APIM attribute, because the attribute introduces a constraint that is not defined in the schema. If anyone else looks at the schema, they will run into the same problem you did.

This may be tricky depending on which version of JSON Schema APIM supports and which version you are using.

In most cases, drafts 04 through 07 require some adjustments to the schema to get the desired behavior when combining allOf with additionalProperties: false:

  • Turn off the allow-additional-properties attribute in the APIM content validation.
  • Add all the properties of the first-depth subschemas to the root as empty schemas. This allows the validator to recognize those properties at the root level so they satisfy additionalProperties (a quick local check follows the schema below):
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "messages": {},
    "temperature": {},
    "top_p": {},
    "stop": {},
    "max_tokens": {},
    "presence_penalty": {},
    "frequency_penalty": {},
    "logit_bias": {},
    "user": {},
    "n": {},
    "function_call": {},
    "functions": {}
  },
  "allOf": [
    {
      "type": "object",
      "properties": {
        "temperature": {},
        "top_p": {},
        "stop": {},
        "max_tokens": {},
        "presence_penalty": {},
        "frequency_penalty": {},
        "logit_bias": {},
        "user": {}
      }
    },
    {
      "type": "object",
      "properties": {
        "messages": {},
        "n": {},
        "function_call": {},
        "functions": {}
      }
    }
  ]
}
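A quick local check of the reworked schema above (a sketch: the file name request-schema.json and the use of the Python jsonschema package are assumptions, and any draft-07 validator should behave the same way) shows that the declared keys now satisfy additionalProperties at the root, while undeclared keys are still rejected.

import json

from jsonschema import Draft7Validator

# Load the reworked draft-07 schema shown above (file name assumed for the example).
with open("request-schema.json") as f:
    schema = json.load(f)

validator = Draft7Validator(schema)

ok_body = {"messages": [{"role": "user", "content": "hi"}], "temperature": 1, "n": 1}
bad_body = {"messages": [], "stackOverflow": -1}

print(list(validator.iter_errors(ok_body)))  # [] -> all declared properties pass
for error in validator.iter_errors(bad_body):
    print(error.message)  # 'stackOverflow' is reported as an additional property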

If you are using JSON Schema draft 2019-09 or later, you can use the newer keyword unevaluatedProperties to get the behavior above automatically.

{
  "$schema": "https://json-schema.org/draft/2019-09/schema",
  "type": "object",
  "unevaluatedProperties": false,
  "allOf": [
    {
      "type": "object",
      "properties": {
        "temperature": {},
        "top_p": {},
        "stop": {},
        "max_tokens": {},
        "presence_penalty": {},
        "frequency_penalty": {},
        "logit_bias": {},
        "user": {}
      }
    },
    {
      "type": "object",
      "properties": {
        "messages": {},
        "n": {},
        "function_call": {},
        "functions": {}
      }
    }
  ]
}

This example fails:

{
  "messages": [
    {
      "role": "user",
      "content": "Find beachfront hotels in San Diego for less than $300 a month with free breakfast."
    }
  ],
  "stackOverflow": -1
}
Invalid

# fails schema constraint https://json-schema.hyperjump.io/schema#/unevaluatedProperties

#/stackOverflow fails schema constraint https://json-schema.hyperjump.io/schema#/unevaluatedProperties
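The same result can be reproduced locally (a sketch assuming the Python jsonschema package, version 4 or later for draft 2019-09 and unevaluatedProperties support, and an assumed file name; the output above came from the Hyperjump web validator).

import json

from jsonschema import Draft201909Validator

# Load the draft 2019-09 schema shown above (file name assumed for the example).
with open("request-schema-2019-09.json") as f:
    schema = json.load(f)

body = {
    "messages": [{"role": "user", "content": "Find beachfront hotels in San Diego."}],
    "stackOverflow": -1,
}

for error in Draft201909Validator(schema).iter_errors(body):
    print(error.message)  # flags 'stackOverflow' as unevaluated; 'messages' is evaluated and passes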

A similar question about Azure OpenAI Swagger validation failing with APIM can be found on Stack Overflow: https://stackoverflow.com/questions/77082801/
