
How to implement universal function-calling capability for LLMs?

Reposted. Author: 撒哈拉. Updated: 2024-12-09 12:25:25

As is well known, the function-calling capability of LLMs is powerful: it solves the problem of letting a large model interact with real business systems. At its core, it is simply function invocation.

A diagram from the OpenAI documentation illustrates this flow (figure not reproduced here).

In short:
  1. The LLM acts as the decision maker: it tells the business system which function should be called and with what arguments.

  2. The business system is responsible for actually implementing that function (e.g. locally, or by calling a service provided by another system), and feeds the function's result back to the LLM.

  3. Based on that result, the LLM composes natural language and continues the interaction with the business system.

A common misconception here is that the function call is executed by the LLM itself. In fact, the LLM only makes the decision; the actual call is performed by the business system.
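To make the division of labor concrete, here is a minimal, hypothetical sketch of the pattern (a simplified version of the full script shown later in this article): the LLM only produces a JSON decision, and the business code looks up and executes the real function.

# Minimal sketch of the decision/execution split (simplified; the full script appears later).
def get_current_weather(tool_input):
    # Placeholder implementation; a real system would call a weather API here.
    return f"The weather of {tool_input['location']} is sunny."

functions = {"get_current_weather": get_current_weather}

# Step 1: the LLM only returns a decision like this (hypothetical example of its JSON output).
decision = {"tool": "get_current_weather", "tool_input": {"location": "Beijing"}}

# Step 2: the business system performs the actual call.
result = functions[decision["tool"]](decision["tool_input"])

# Step 3: the result would then be sent back to the LLM to compose a natural-language reply.
print(result)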

At present, there are two mainstream ways to implement function-calling:
  1. Native support in the LLM itself (a brief sketch follows below).

  2. A prompt template, the ReAct template being the typical example.

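For the first approach, the tool definitions are passed to the model's API directly and the model returns a structured tool call. The sketch below uses the OpenAI Python SDK's "tools" parameter purely as an illustration; the exact interface varies across SDK versions and vendors, and this article itself uses the second, prompt-based approach.

# Sketch of natively supported function-calling (OpenAI-style chat-completions API, illustrative only).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

native_tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model with tool-call support
    messages=[{"role": "user", "content": "厦门天气如何?"}],
    tools=native_tools,
)

# The model does not execute anything; it only reports which tool to call and with what arguments.
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, tool_call.function.arguments)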

In practical applications we still have to answer another important question:

What triggers function-calling? That is: when should the function-calling capability be used, and when should it not?

How this question is handled is critical to the overall flow.

We can address it with a dedicated prompt:

You have access to the following tools:
{json.dumps(tools)}
You can select one of the above tools or just response user's content and respond with only a JSON object matching the following schema:
{{
  "tool": <name of the selected tool>,
  "tool_input": <parameters for the selected tool, matching the tool'
s JSON schema>,
  "message": <direct response users content>}

This prompt tells the LLM: if function-calling is needed, pick the best-matching function from tools (the set of predefined functions); if not, reply to the user in natural language, exactly as in a normal conversation. The output format is fixed as JSON, which makes it easy to parse.
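For illustration, the two shapes the response can take look like this (both examples are taken verbatim from the experiment logs later in this article):

Plain chat, no tool selected:
{"tool": null, "tool_input": null, "message": "你好,有什么可以帮您的吗?"}

A tool is selected:
{"tool": "get_current_weather", "tool_input": {"location": "Xiamen", "unit": "celsius"}, "message": ""}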


This suggests a useful takeaway: as long as the base LLM is strong enough (i.e. it can strictly follow the prompt's instructions), we can implement function-calling ourselves even when the LLM has no native support for it, freeing us from dependence on any particular LLM!

After the function call returns, if we want to present the result in natural language, we need one more LLM call to consolidate it. For that we can use another prompt:

Please generate a natural language description based on the following question and answer.
Question: [Content of the question]
Answer: [Content of the answer]
Generated Description: The result of [key phrase from the question] is [answer].
If necessary, you can polish the description.
Only output the Description, with Chinese language.

This prompt tells the LLM to restate the given question and answer in natural language. Chinese output is specified here; adjust it to your needs.

Below is a complete, runnable Python script:

import requests
import json
import random

# Predefined tool (function) definitions
tools = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city e.g. Beijing"
                },
                "unit": {
                    "type": "string",
                    "enum": [
                        "celsius"
                    ]
                }
            },
            "required": [
                "location"
            ]
        }
    },
    {
        "name": "calculator",
        "description": "计算器",
        "parameters": {
            "type": "int",
            "properties": {
                "a": {
                    "type": "int",
                    "description": "the first number"
                },
                "b": {
                    "type": "int",
                    "description": "the second number"
                }
            },
            "required": [
                "a",
                "b"
            ]
        }
    }
]

# Get the weather (returns a random result; replace with a real API call in production)
def get_current_weather(*args):
    # Possible weather conditions
    weather_conditions = ["sunny", "cloudy", "rainy", "snowy"]
    # Possible temperature range, in Celsius
    temperature_min = -10  # minimum temperature
    temperature_max = 35  # maximum temperature
    # Randomly pick a weather condition
    condition = random.choice(weather_conditions)
    # Randomly generate a temperature
    temperature = random.randint(temperature_min, temperature_max)
    # Return a string describing the current weather
    return f"The weather of {args[0].get('location')} is {condition}, and the temperature is {temperature}°C."

def calculator(args):
    return sum(value for value in args.values() if isinstance(value, int))

# Mapping from function name to implementation
functions = {
    "get_current_weather": get_current_weather,
    "calculator": calculator,
}

# Entrance prompt that drives the overall flow
entrance_prompt = f"""You have access to the following tools:
{json.dumps(tools)}
You can select one of the above tools or just response user's content and respond with only a JSON object matching the following schema:
{{
  "tool": <name of the selected tool>,
  "tool_input": <parameters for the selected tool, matching the tool's JSON schema>,
  "message": <direct response users content>
}}"""

# Prompt that asks the LLM to describe the result in natural language
conformity_prompt = f"""
Please generate a natural language description based on the following question and answer.
Question: [Content of the question]
Answer: [Content of the answer]
Generated Description: The result of [key phrase from the question] is [answer].
If necessary, you can polish the description.
Only output the Description, with Chinese language.
"""

# Extract the first balanced {...} block from a string (the LLM may wrap its JSON in markdown code fences)
def extract_json(s):
    stack = 0
    start = s.find('{')
    if start == -1:
        return None

    for i in range(start, len(s)):
        if s[i] == '{':
            stack += 1
        elif s[i] == '}':
            stack -= 1
            if stack == 0:
                return s[start:i + 1]
    return None

# Result wrapper: type "func" means the result came from a function call (it will be summarized by the LLM again); "default" means a plain natural-language result
class ResultWrapper:
    def __init__(self, type, result):
        self.type = type
        self.result = result

# Parse the LLM's reply; if it contains a JSON object, parse it and dispatch to the selected function
def parse_result(res):
    json_str = extract_json(res["message"]["content"])
    if json_str is not None:
        obj = json.loads(json_str)
        if "tool" in obj:
            if obj["tool"] in functions:
                fun = functions[obj["tool"]]
                return ResultWrapper("func", fun(obj["tool_input"]))
            else:
                return ResultWrapper("default", obj["message"])
        else:
            return ResultWrapper("default", res["message"]["content"])
    else:
        return ResultWrapper("default", res["message"]["content"])

def invokeLLM(messages):
    url = "${domain}/v1/chat/completions" #需替换域名
    model = ""
    payload = {
        "model": model,
        "messages": messages,
    }
    payload = json.dumps(payload)
    headers = {
        'Content-Type': 'application/json'
    }
    print("PAYLOAD: ", payload)
    response = requests.request("POST", url, headers=headers, data=payload)
    print("RESPONSE: ", response.text)
    print("=======================================================================")
    resp = json.loads(response.text)
    return resp["choices"][0]


if __name__ == '__main__':
    while True:
        messages = [
            {
                "role": "system",
                "content": entrance_prompt
            }
        ]
        user_input = input('Enter a string: ')
        messages.append({
            "role": "user",
            "content": user_input
        })
        result_wrapper = parse_result(invokeLLM(messages))
        if result_wrapper.type == "func":
            messages = [
                {
                    "role": "user",
                    "content": f"{conformity_prompt}\n\nThe question:{user_input}\nThe answer:{result_wrapper.result}"
                }
            ]
            print("FINAL RESULT WITH FUNCTION CALL: ", parse_result(invokeLLM(messages)).result)
        else:
            print("FINAL RESULT: ", result_wrapper.result)

Experiment results:

Enter a string: 你好
PAYLOAD: {"model": "", "messages": [{"role": "system", "content": "You have access to the following tools:\n[{\"name\": \"get_current_weather\", \"description\": \"Get the current weather in a given location\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\", \"description\": \"The city e.g. Beijing\"}, \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\"]}}, \"required\": [\"location\"]}}, {\"name\": \"calculator\", \"description\": \"\\u8ba1\\u7b97\\u5668\", \"parameters\": {\"type\": \"int\", \"properties\": {\"a\": {\"type\": \"int\", \"description\": \"the first number\"}, \"b\": {\"type\": \"int\", \"description\": \"the second number\"}}, \"required\": [\"a\", \"b\"]}}]\nYou can select one of the above tools or just response user's content and respond with only a JSON object matching the following schema:\n{\n \"tool\": <name of the selected tool>,\n \"tool_input\": <parameters for the selected tool, matching the tool's JSON schema>,\n \"message\": <direct response users content>\n}"}, {"role": "user", "content": "\u4f60\u597d"}]}
RESPONSE: {"model":"","object":"","choices":[{"index":0,"message":{"role":"assistant","content":"```json\n{\"tool\": null, \"tool_input\": null, \"message\": \"你好,有什么可以帮您的吗?\"}\n```","function_call":null},"finish_reason":"stop"}],"queueTime":0.0020923614501953125,"costTime":0.7685532569885254,"usage":{"prompt_token":244,"completion_token":29,"total_tokens":273}}
=======================================================================
FINAL RESULT: 你好,有什么可以帮您的吗?


Enter a string: 厦门天气如何?
PAYLOAD: {"model": "", "messages": [{"role": "system", "content": "You have access to the following tools:\n[{\"name\": \"get_current_weather\", \"description\": \"Get the current weather in a given location\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\", \"description\": \"The city e.g. Beijing\"}, \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\"]}}, \"required\": [\"location\"]}}, {\"name\": \"calculator\", \"description\": \"\\u8ba1\\u7b97\\u5668\", \"parameters\": {\"type\": \"int\", \"properties\": {\"a\": {\"type\": \"int\", \"description\": \"the first number\"}, \"b\": {\"type\": \"int\", \"description\": \"the second number\"}}, \"required\": [\"a\", \"b\"]}}]\nYou can select one of the above tools or just response user's content and respond with only a JSON object matching the following schema:\n{\n \"tool\": <name of the selected tool>,\n \"tool_input\": <parameters for the selected tool, matching the tool's JSON schema>,\n \"message\": <direct response users content>\n}"}, {"role": "user", "content": "\u53a6\u95e8\u5929\u6c14\u5982\u4f55\uff1f"}]}
RESPONSE: {"model":"","object":"","choices":[{"index":0,"message":{"role":"assistant","content":"```json\n{\"tool\": \"get_current_weather\", \"tool_input\": {\"location\": \"Xiamen\", \"unit\": \"celsius\"}, \"message\": \"\"}\n```","function_call":null},"finish_reason":"stop"}],"queueTime":0.0021338462829589844,"costTime":0.9370713233947754,"usage":{"prompt_token":247,"completion_token":36,"total_tokens":283}}
=======================================================================
PAYLOAD: {"model": "", "messages": [{"role": "user", "content": "\nPlease generate a natural language description based on the following question and answer.\nQuestion: [Content of the question]\nAnswer: [Content of the answer]\nGenerated Description: The result of [key phrase from the question] is [answer].\nIf necessary, you can polish the description.\nOnly output the Description, with Chinese language.\n\n\nThe question:\u53a6\u95e8\u5929\u6c14\u5982\u4f55\uff1f\nThe answer:The weather of Xiamen is cloudy, and the temperature is 35\u00b0C."}]}
RESPONSE: {"model":"","object":"","choices":[{"index":0,"message":{"role":"assistant","content":"厦门天气情况是:多云,气温35°C。","function_call":null},"finish_reason":"stop"}],"queueTime":0.008246660232543945,"costTime":0.3240656852722168,"usage":{"prompt_token":143,"completion_token":12,"total_tokens":155}}
=======================================================================
FINAL RESULT WITH FUNCTION CALL: 厦门天气情况是:多云,气温35°C。


Enter a string: 383加上135721等于多少?
PAYLOAD: {"model": "", "messages": [{"role": "system", "content": "You have access to the following tools:\n[{\"name\": \"get_current_weather\", \"description\": \"Get the current weather in a given location\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\", \"description\": \"The city e.g. Beijing\"}, \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\"]}}, \"required\": [\"location\"]}}, {\"name\": \"calculator\", \"description\": \"\\u8ba1\\u7b97\\u5668\", \"parameters\": {\"type\": \"int\", \"properties\": {\"a\": {\"type\": \"int\", \"description\": \"the first number\"}, \"b\": {\"type\": \"int\", \"description\": \"the second number\"}}, \"required\": [\"a\", \"b\"]}}]\nYou can select one of the above tools or just response user's content and respond with only a JSON object matching the following schema:\n{\n \"tool\": <name of the selected tool>,\n \"tool_input\": <parameters for the selected tool, matching the tool's JSON schema>,\n \"message\": <direct response users content>\n}"}, {"role": "user", "content": "383\u52a0\u4e0a135721\u7b49\u4e8e\u591a\u5c11\uff1f"}]}
RESPONSE: {"model":"","object":"","choices":[{"index":0,"message":{"role":"assistant","content":"```json\n{\"tool\": \"calculator\", \"tool_input\": {\"a\": 383, \"b\": 135721}, \"message\": null}\n```","function_call":null},"finish_reason":"stop"}],"queueTime":0.0021514892578125,"costTime":0.9161381721496582,"usage":{"prompt_token":252,"completion_token":35,"total_tokens":287}}
=======================================================================
PAYLOAD: {"model": "", "messages": [{"role": "user", "content": "\nPlease generate a natural language description based on the following question and answer.\nQuestion: [Content of the question]\nAnswer: [Content of the answer]\nGenerated Description: The result of [key phrase from the question] is [answer].\nIf necessary, you can polish the description.\nOnly output the Description, with Chinese language.\n\n\nThe question:383\u52a0\u4e0a135721\u7b49\u4e8e\u591a\u5c11\uff1f\nThe answer:136104"}]}
RESPONSE: {"model":"","object":"","choices":[{"index":0,"message":{"role":"assistant","content":"383加上135721等于136104。","function_call":null},"finish_reason":"stop"}],"queueTime":0.0064160823822021484,"costTime":0.28981900215148926,"usage":{"prompt_token":134,"completion_token":11,"total_tokens":145}}
=======================================================================
FINAL RESULT WITH FUNCTION CALL: 383加上135721等于136104。

In this example, two functions are predefined: a weather lookup and a calculator. The experiment ran three rounds: the first is a casual-chat scenario that does not hit any function, while the latter two hit the weather lookup and the calculator respectively.

In real-world work you may need to predefine a very large number of functions. You then have to account for the LLM's input token limit; when necessary, split the tools into modules and turn a single LLM decision into multiple decisions, which in more general terms is hierarchical intent recognition. A rough sketch of this idea follows below.
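As a rough sketch of that idea (module names and prompts below are hypothetical and not part of the script above), the first LLM call only routes to a tool module, and the second call receives just that module's function definitions, keeping each prompt within the token budget:

# Hypothetical two-stage routing sketch for hierarchical intent recognition.
import json

# Tools grouped into modules so that no single prompt has to carry every definition.
tool_modules = {
    "weather": [{"name": "get_current_weather", "description": "Get the current weather"}],
    "math": [{"name": "calculator", "description": "Add two numbers"}],
}

def build_routing_prompt():
    # Stage 1: only module names and the tool names they contain -- a much smaller prompt.
    overview = {name: [t["name"] for t in tools] for name, tools in tool_modules.items()}
    return ("Select the most relevant tool module for the user's request and respond with only "
            'a JSON object like {"module": <module name>}.\n'
            f"Available modules: {json.dumps(overview)}")

def build_selection_prompt(module):
    # Stage 2: full definitions for the chosen module only, using the same schema as above.
    return ("You have access to the following tools:\n"
            f"{json.dumps(tool_modules[module])}\n"
            "Respond with only a JSON object matching the schema shown earlier in this article.")

# In a real flow each prompt would be sent through invokeLLM(); here we just print them.
print(build_routing_prompt())
print(build_selection_prompt("weather"))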
