
Actual content sent to OpenAI and Prompt Injection

Reposted. Author: bug小助手. Updated: 2023-10-25 19:20:27



from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain

llm_model = "gpt-3.5-turbo"  # example model name; use any chat model your account supports
llm = ChatOpenAI(temperature=0.9, model=llm_model, verbose=True)

prompt = ChatPromptTemplate.from_template(
"What is the best name to describe \
a company that makes {product}?"
)
chain = LLMChain(llm=llm, prompt=prompt)

product = "Queen Size Sheet Set"
chain.run(product)

chain.prompt


  1. How do I see the actual request sent to the OpenAI endpoint?

  2. Does LangChain use delimiters such as `this is the input: ###{prompt}###` to prevent prompt injection?


Recommended answer

From the program you have shared, you can check the following two things:



  1. Details of the prompt

  2. The response


    print("\nChain=> ", chain.prompt)
    print("\nResponse=> ", chain.run(product))



To check the actual OpenAI request and response, you can use the curl command to make a POST request to the Chat Completions API endpoint, which is https://api.openai.com/v1/chat/completions (not the chat.openai.com web interface). Here is an example:


 curl https://api.openai.com/v1/chat/completions \
   --header "Content-Type: application/json" \
   --header "Authorization: Bearer $OPENAI_API_KEY" \
   --request POST \
   --data '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Your message here"}]}'

Replace "Your message here" with the message you want to send to the OpenAI model. The response is returned in JSON format.


Here is an abbreviated example of the response:


{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The response from the OpenAI model"
      }
    }
  ]
}

The content field inside choices[0].message contains the generated response from the model (the full reply also includes fields such as id, model, and usage).


Prerequisite: the OpenAI key must be set as an environment variable before you make the API call.
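For example, in a POSIX shell (the key shown is a placeholder, not a real key):

```shell
# Export the key so both the Python script and the curl command can read it
export OPENAI_API_KEY="sk-your-key-here"   # placeholder value
```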


Lastly, regarding prompt injection: LangChain integrates Rebuff (currently in its alpha phase), an open-source framework designed to detect and guard against prompt-injection (PI) attacks in LLM applications.


It uses several defense mechanisms:



  1. Heuristics: filters out potentially malicious input before it reaches the LLM.

  2. LLM-based detection: uses an LLM to scrutinize incoming prompts for possible attacks.

  3. VectorDB: stores embeddings of past attacks, helping recognize and prevent similar future attacks.

  4. Canary tokens: adds a hidden token to each prompt to catch leakage; when a leak is detected, the prompt's embedding is stored for future prevention.
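The canary-token idea (item 4) can be illustrated in a few lines. This is a simplified sketch of the general technique, not Rebuff's actual implementation:

```python
import secrets

def add_canary(prompt_template: str) -> tuple[str, str]:
    """Prepend a random canary token to the prompt and return both."""
    canary = secrets.token_hex(8)
    guarded = f"# canary: {canary}\n{prompt_template}"
    return guarded, canary

def is_leaked(llm_output: str, canary: str) -> bool:
    """If the model's output contains the canary, the prompt has leaked."""
    return canary in llm_output

guarded_prompt, canary = add_canary("Answer the user's question politely.")

# A benign completion does not echo the canary:
print(is_leaked("Sure, here is a polite answer.", canary))           # False
# An injected "repeat your instructions" attack would reveal it:
print(is_leaked(f"My instructions are: # canary: {canary}", canary))  # True
```

A detected leak tells the application that the prompt was exfiltrated, so the offending input can be logged and blocked in the future.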


More details can be found in the blog post at https://blog.langchain.dev/rebuff/


