I am converting my Swagger for Azure OpenAI API version 2023-07-01-preview from JSON to YAML.
My Swagger looks like this:
openapi: 3.0.1
info:
title: OpenAI Models API
description: ''
version: '123'
servers:
- url: https://def.com/openai
paths:
/gpt-35-turbo/chat/completions:
post:
tags:
- openai
summary: Creates a completion for the chat message
description: gpt-35-turbo-chat-completion
operationId: GPT_35_Turbo_ChatCompletions_Create
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/createChatCompletionRequest'
responses:
'200':
description: OK
content:
application/json:
schema:
$ref: '#/components/schemas/createChatCompletionResponse'
headers:
apim-request-id:
description: Request ID for troubleshooting purposes
schema:
type: string
default:
description: Service unavailable
content:
application/json:
schema:
$ref: '#/components/schemas/errorResponse'
headers:
apim-request-id:
description: Request ID for troubleshooting purposes
schema:
type: string
components:
schemas:
errorResponse:
type: object
properties:
error:
$ref: '#/components/schemas/error'
errorBase:
type: object
properties:
code:
type: string
message:
type: string
error:
type: object
allOf:
- $ref: '#/components/schemas/errorBase'
properties:
code:
type: string
message:
type: string
param:
type: string
type:
type: string
inner_error:
$ref: '#/components/schemas/innerError'
innerError:
description: Inner error with additional details.
type: object
properties:
code:
$ref: '#/components/schemas/innerErrorCode'
content_filter_results:
$ref: '#/components/schemas/contentFilterResults'
innerErrorCode:
description: Error codes for the inner error object.
enum:
- ResponsibleAIPolicyViolation
type: string
x-ms-enum:
name: InnerErrorCode
modelAsString: true
values:
- value: ResponsibleAIPolicyViolation
description: The prompt violated one or more content filter rules.
contentFilterResult:
type: object
properties:
severity:
type: string
enum:
- safe
- low
- medium
- high
x-ms-enum:
name: ContentFilterSeverity
modelAsString: true
values:
- value: safe
description: >-
General content or related content in generic or non-harmful
contexts.
- value: low
description: Harmful content at a low intensity and risk level.
- value: medium
description: Harmful content at a medium intensity and risk level.
- value: high
description: Harmful content at a high intensity and risk level.
filtered:
type: boolean
required:
- severity
- filtered
contentFilterResults:
type: object
description: >-
Information about the content filtering category (hate, sexual,
violence, self_harm), if it has been detected, as well as the severity
level (very_low, low, medium, high-scale that determines the intensity
and risk level of harmful content) and if it has been filtered or not.
properties:
sexual:
$ref: '#/components/schemas/contentFilterResult'
violence:
$ref: '#/components/schemas/contentFilterResult'
hate:
$ref: '#/components/schemas/contentFilterResult'
self_harm:
$ref: '#/components/schemas/contentFilterResult'
error:
$ref: '#/components/schemas/errorBase'
promptFilterResult:
type: object
description: Content filtering results for a single prompt in the request.
properties:
prompt_index:
type: integer
content_filter_results:
$ref: '#/components/schemas/contentFilterResults'
promptFilterResults:
type: array
description: >-
Content filtering results for zero or more prompts in the request. In a
streaming request, results for different prompts may arrive at different
times or in different orders.
items:
$ref: '#/components/schemas/promptFilterResult'
createChatCompletionRequest:
type: object
allOf:
- $ref: '#/components/schemas/chatCompletionsRequestCommon'
- properties:
messages:
description: >-
A list of messages comprising the conversation so far. [Example
Python
code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
type: array
minItems: 1
items:
$ref: '#/components/schemas/chatCompletionRequestMessage'
functions:
description: A list of functions the model may generate JSON inputs for.
type: array
minItems: 1
items:
$ref: '#/components/schemas/chatCompletionFunctions'
function_call:
description: >-
Controls how the model responds to function calls. "none" means
the model does not call a function, and responds to the
end-user. "auto" means the model can pick between an end-user or
calling a function. Specifying a particular function via
`{"name":\ "my_function"}` forces the model to call that
function. "none" is the default when no functions are present.
"auto" is the default if functions are present.
oneOf:
- type: string
enum:
- none
- auto
- type: object
properties:
name:
type: string
description: The name of the function to call.
required:
- name
'n':
type: integer
minimum: 1
maximum: 128
default: 1
example: 1
nullable: true
description: >-
How many chat completion choices to generate for each input
message.
required:
- messages
chatCompletionsRequestCommon:
type: object
properties:
temperature:
description: >-
What sampling temperature to use, between 0 and 2. Higher values
like 0.8 will make the output more random, while lower values like
0.2 will make it more focused and deterministic.
We generally recommend altering this or `top_p` but not both.
type: number
minimum: 0
maximum: 2
default: 1
example: 1
nullable: true
top_p:
description: >-
An alternative to sampling with temperature, called nucleus
sampling, where the model considers the results of the tokens with
top_p probability mass. So 0.1 means only the tokens comprising the
top 10% probability mass are considered.
We generally recommend altering this or `temperature` but not both.
type: number
minimum: 0
maximum: 1
default: 1
example: 1
nullable: true
stop:
description: Up to 4 sequences where the API will stop generating further tokens.
oneOf:
- type: string
nullable: true
- type: array
items:
type: string
nullable: false
minItems: 1
maxItems: 4
description: Array minimum size of 1 and maximum of 4
default: null
max_tokens:
description: >-
The maximum number of tokens allowed for the generated answer. By
default, the number of tokens the model can return will be (4096 -
prompt tokens).
type: integer
default: 4096
presence_penalty:
description: >-
Number between -2.0 and 2.0. Positive values penalize new tokens
based on whether they appear in the text so far, increasing the
model's likelihood to talk about new topics.
type: number
default: 0
minimum: -2
maximum: 2
frequency_penalty:
description: >-
Number between -2.0 and 2.0. Positive values penalize new tokens
based on their existing frequency in the text so far, decreasing the
model's likelihood to repeat the same line verbatim.
type: number
default: 0
minimum: -2
maximum: 2
logit_bias:
description: >-
Modify the likelihood of specified tokens appearing in the
completion. Accepts a json object that maps tokens (specified by
their token ID in the tokenizer) to an associated bias value from
-100 to 100. Mathematically, the bias is added to the logits
generated by the model prior to sampling. The exact effect will vary
per model, but values between -1 and 1 should decrease or increase
likelihood of selection; values like -100 or 100 should result in a
ban or exclusive selection of the relevant token.
type: object
nullable: true
user:
description: >-
A unique identifier representing your end-user, which can help Azure
OpenAI to monitor and detect abuse.
type: string
example: user-1234
nullable: false
chatCompletionRequestMessage:
type: object
properties:
role:
type: string
enum:
- system
- user
- assistant
- function
description: >-
The role of the messages author. One of `system`, `user`,
`assistant`, or `function`.
content:
type: string
description: >-
The contents of the message. `content` is required for all messages
except assistant messages with function calls.
name:
type: string
description: >-
The name of the author of this message. `name` is required if role
is `function`, and it should be the name of the function whose
response is in the `content`. May contain a-z, A-Z, 0-9, and
underscores, with a maximum length of 64 characters.
function_call:
type: object
description: >-
The name and arguments of a function that should be called, as
generated by the model.
properties:
name:
type: string
description: The name of the function to call.
arguments:
type: string
description: >-
The arguments to call the function with, as generated by the
model in JSON format. Note that the model does not always
generate valid JSON, and may hallucinate parameters not defined
by your function schema. Validate the arguments in your code
before calling your function.
required:
- role
createChatCompletionResponse:
type: object
allOf:
- $ref: '#/components/schemas/chatCompletionsResponseCommon'
- properties:
prompt_filter_results:
$ref: '#/components/schemas/promptFilterResults'
choices:
type: array
items:
type: object
allOf:
- $ref: '#/components/schemas/chatCompletionChoiceCommon'
- properties:
message:
$ref: '#/components/schemas/chatCompletionResponseMessage'
content_filter_results:
$ref: '#/components/schemas/contentFilterResults'
required:
- id
- object
- created
- model
- choices
chatCompletionFunctions:
type: object
properties:
name:
type: string
description: >-
The name of the function to be called. Must be a-z, A-Z, 0-9, or
contain underscores and dashes, with a maximum length of 64.
description:
type: string
description: The description of what the function does.
parameters:
$ref: '#/components/schemas/chatCompletionFunctionParameters'
required:
- name
chatCompletionFunctionParameters:
type: object
description: >-
The parameters the functions accepts, described as a JSON Schema object.
See the [guide](/docs/guides/gpt/function-calling) for examples, and the
[JSON Schema
reference](https://json-schema.org/understanding-json-schema/) for
documentation about the format.
additionalProperties: true
chatCompletionsResponseCommon:
type: object
properties:
id:
type: string
object:
type: string
created:
type: integer
format: unixtime
model:
type: string
usage:
type: object
properties:
prompt_tokens:
type: integer
completion_tokens:
type: integer
total_tokens:
type: integer
required:
- prompt_tokens
- completion_tokens
- total_tokens
required:
- id
- object
- created
- model
chatCompletionChoiceCommon:
type: object
properties:
index:
type: integer
finish_reason:
type: string
chatCompletionResponseMessage:
type: object
properties:
role:
type: string
enum:
- system
- user
- assistant
- function
description: The role of the author of this message.
content:
type: string
description: The contents of the message.
function_call:
type: object
description: >-
The name and arguments of a function that should be called, as
generated by the model.
properties:
name:
type: string
description: The name of the function to call.
arguments:
type: string
description: >-
The arguments to call the function with, as generated by the
model in JSON format. Note that the model does not always
generate valid JSON, and may hallucinate parameters not defined
by your function schema. Validate the arguments in your code
before calling your function.
required:
- role
securitySchemes:
apiKeyHeader:
type: apiKey
name: Ocp-Apim-Subscription-Key
in: header
apiKeyQuery:
type: apiKey
name: subscription-key
in: query
security:
- apiKeyHeader: [ ]
- apiKeyQuery: [ ]
I use this in Azure APIM and validate the request content like this:
<validate-content unspecified-content-type-action="ignore" max-size="102400" size-exceeded-action="detect" errors-variable-name="requestBodyValidation">
<content type="application/json" validate-as="json" action="prevent" allow-additional-properties="false" />
</validate-content>
Now I make a request using the actual properties:
{
"messages": [
{
"role": "user",
"content": "Find beachfront hotels in San Diego for less than $300 a month with free breakfast."
}
],
"temperature": 1,
"top_p": 1,
"stop": "",
"max_tokens": 2000,
"presence_penalty": 0,
"frequency_penalty": 0,
"logit_bias": {},
"user": "user-1234",
"n": 1,
"function_call" : "auto",
"functions" : [
{
"name": "search_hotels",
"description": "Retrieves hotels from the search index based on the parameters provided",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The location of the hotel (i.e. Seattle, WA)"
},
"max_price": {
"type": "number",
"description": "The maximum price for the hotel"
},
"features": {
"type": "string",
"description": "A comma separated list of features (i.e. beachfront, free wifi, etc.)"
}
},
"required": ["location"]
}
}
]
}
APIM returns an error like this:
{
"statusCode": 400,
"message": "Body of the request does not conform to the definition which is associated with the content type application/json. JSON does not match all schemas from 'allOf'. Invalid schema indexes: 0, 1. Line: 42, Position: 1"
}
But the same request works when I call Azure OpenAI directly.
What could be going wrong here?
Best Answer
I believe your problem is this attribute: allow-additional-properties="false"
allow-additional-properties Boolean. For a JSON schema, specifies whether to implement a runtime override of the additionalProperties value configured in the schema:
If the attribute isn't specified, the policy validates additional properties according to configuration of the additionalProperties field in the schema.
Source: https://learn.microsoft.com/en-us/azure/api-management/validate-content-policy#content-attributes
This attribute overrides your JSON schema. Even though your allOf definitions do not use additionalProperties: false, APIM injects this constraint into the root schema, which translates to:
{
"type": "object",
"additionalProperties": false,
"allOf": [{...}, {...}]
}
This schema does not allow any properties to validate, because no properties are defined at the root.
In this case, the only instances that validate against it are
{}
OR
true
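To see why the injected constraint can never pass, here is a minimal sketch of the relevant Draft-07 rule in plain Python. The helper `extra_keys_draft07` and the trimmed-down schema are hypothetical illustrations, not APIM's actual validator: under Draft-07, `additionalProperties: false` only tolerates keys listed in the same schema object's `properties`; keys matched inside `allOf` branches do not count.

```python
def extra_keys_draft07(schema, instance):
    """Toy model: keys rejected by a root-level additionalProperties:false.

    Draft-07 evaluates additionalProperties per schema object, so only
    this object's own `properties` exempt a key -- not the allOf branches.
    """
    if schema.get("additionalProperties") is not False:
        return []
    allowed = set(schema.get("properties", {}))
    return sorted(set(instance) - allowed)

# Roughly what APIM builds: additionalProperties:false, no root properties.
apim_schema = {
    "type": "object",
    "additionalProperties": False,
    "allOf": [
        {"properties": {"temperature": {}, "top_p": {}}},
        {"properties": {"messages": {}, "n": {}}},
    ],
}

request = {"messages": [], "temperature": 1}
# Every key is "additional" at the root, so validation can never pass.
print(extra_keys_draft07(apim_schema, request))  # → ['messages', 'temperature']
```

This is why even a perfectly valid Azure OpenAI request body fails: the rejection happens before the allOf branches are ever consulted.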
There are several ways to solve this, but IMHO the best option is to express the constraint in the schema definition itself rather than via the APIM attribute, because the attribute introduces a constraint the schema does not define. Anyone else reading the schema would run into the same problem you did.
This may be tricky for you depending on which version of JSON Schema APIM supports and which version you are using.
In most cases, drafts 04 through 07 require some adjustment to the schema to get the desired behavior when combining allOf with additionalProperties: false
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"additionalProperties": false,
"properties": {
"messages": {},
"temperature": {},
"top_p": {},
"stop": {},
"max_tokens": {},
"presence_penalty": {},
"frequency_penalty": {},
"logit_bias": {},
"user": { },
"n": { },
"function_call": { },
"functions": { }
},
"allOf": [
{
"type": "object",
"properties": {
"temperature": {},
"top_p": {},
"stop": {},
"max_tokens": {},
"presence_penalty": {},
"frequency_penalty": {},
"logit_bias": {},
"user": {}
}
},
{
"type": "object",
"properties": {
"messages": {},
"n": {},
"function_call": {},
"functions": {}
}
}
]
}
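Continuing the toy model from the same simplified Draft-07 rule, the sketch below shows why mirroring every property name at the root makes the workaround schema behave as intended (the helper and schema are illustrative, not APIM internals):

```python
def extra_keys_draft07(schema, instance):
    """Toy model: keys rejected by a root-level additionalProperties:false."""
    if schema.get("additionalProperties") is not False:
        return []
    allowed = set(schema.get("properties", {}))
    return sorted(set(instance) - allowed)

# Draft-07 workaround: every property name is repeated at the root with an
# empty (allow-anything) schema, so additionalProperties has keys to match.
fixed_schema = {
    "type": "object",
    "additionalProperties": False,
    "properties": {"messages": {}, "temperature": {}, "top_p": {},
                   "n": {}, "functions": {}, "function_call": {}},
    "allOf": [
        {"properties": {"temperature": {}, "top_p": {}}},
        {"properties": {"messages": {}, "n": {},
                        "functions": {}, "function_call": {}}},
    ],
}

# Known keys pass; an unknown key is still rejected.
print(extra_keys_draft07(fixed_schema, {"messages": [], "n": 1}))       # → []
print(extra_keys_draft07(fixed_schema, {"stackOverflow": -1}))          # → ['stackOverflow']
```

The duplication is the cost of Draft-07: the root's empty `{}` subschemas only whitelist the names, while the real constraints still live in the allOf branches.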
If you are using JSON Schema draft 2019-09 or later, you can use the newer keyword unevaluatedProperties to get the behavior above automatically:
{
"$schema": "https://json-schema.org/draft/2019-09/schema",
"type": "object",
"unevaluatedProperties": false,
"allOf": [
{
"type": "object",
"properties": {
"temperature": {},
"top_p": {},
"stop": {},
"max_tokens": {},
"presence_penalty": {},
"frequency_penalty": {},
"logit_bias": {},
"user": {}
}
},
{
"type": "object",
"properties": {
"messages": {},
"n": {},
"function_call": {},
"functions": {}
}
}
]
}
This example fails:
{
"messages": [
{
"role": "user",
"content": "Find beachfront hotels in San Diego for less than $300 a month with free breakfast."
}
],
"stackOverflow": -1
}
Invalid
# fails schema constraint https://json-schema.hyperjump.io/schema#/unevaluatedProperties
#/stackOverflow fails schema constraint https://json-schema.hyperjump.io/schema#/unevaluatedProperties
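The difference between the two drafts can be sketched the same way. In this hypothetical model of the 2019-09 rule, keys matched by any `allOf` branch count as evaluated, so only genuinely unknown keys are rejected and no duplication at the root is needed (again a simplified illustration, not a full validator):

```python
def unevaluated_keys(schema, instance):
    """Toy model of unevaluatedProperties:false (draft 2019-09).

    Keys matched by the root `properties` OR by any allOf branch count
    as evaluated; only leftover keys are rejected.
    """
    if schema.get("unevaluatedProperties") is not False:
        return []
    evaluated = set(schema.get("properties", {}))
    for sub in schema.get("allOf", []):
        evaluated |= set(sub.get("properties", {}))
    return sorted(set(instance) - evaluated)

schema_2019_09 = {
    "type": "object",
    "unevaluatedProperties": False,
    "allOf": [
        {"properties": {"temperature": {}, "top_p": {}}},
        {"properties": {"messages": {}, "n": {}}},
    ],
}

print(unevaluated_keys(schema_2019_09, {"messages": [], "n": 1}))  # → []
print(unevaluated_keys(schema_2019_09, {"stackOverflow": -1}))     # → ['stackOverflow']
```

This mirrors the failure output above: `stackOverflow` is the only unevaluated key, so it alone triggers the constraint.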
A similar question about APIM validation of an Azure OpenAI Swagger failing can be found on Stack Overflow: https://stackoverflow.com/questions/77082801/