from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain
llm = ChatOpenAI(temperature=0.9, model=llm_model, verbose=True)
prompt = ChatPromptTemplate.from_template(
"What is the best name to describe \
a company that makes {product}?"
)
chain = LLMChain(llm=llm, prompt=prompt)
product = "Queen Size Sheet Set"
chain.run(product)
chain.prompt
- how do I see the actual command send to openai endpoint
- does the langchain use delimiter like
this is the input:###{prompt}###
to prevent prompt injection
更多回答
优秀答案推荐
From the program perspective that you have shared you check below two things
从你分享的程序的角度来看,检查以下两件事
Details of prompt
Response
print("\nChain=> ",chain.prompt
print("\nResponse=> ",chain.run(product))
To check the OpenAI request and response (actual content), you can use the curl command to make a POST request to the OpenAI Chat API endpoint. Here is an example of how to do it:
要检查OpenAI请求和响应(实际内容),可以使用curl命令向OpenAI chat API端点发出POST请求。下面是一个如何做到这一点的例子:
curl --header "Content-Type:application/json" --request POST --data '{"message": "Your message here"}' https://chat.openai.com/chat
Replace "Your message here" with the message you want to send to the OpenAI model. The response will be returned in JSON format.
将“Your Message Here”替换为您想要发送到OpenAI模型的消息。响应将以JSON格式返回。
Here is an example of the response:
以下是回应的一个例子:
{
"response": "The response from the OpenAI model"
}
The response field contains the generated response from the model
响应字段包含从模型生成的响应
Prerequiste - OpenAI Key has be set as environment variable before you make a api call
Prerequiste-在进行API调用之前,OpenAI密钥已设置为环境变量
Lastly, regarding prompt injection, Langchain uses Rebuff (its in alpha phase). It's an open-source framework designed to detect and guard against PI attacks in LLM applications
最后,关于即时注入,Langchain使用Rebuff(它处于alpha阶段)。它是一个开源框架,旨在检测和防范LLM应用程序中的PI攻击
It uses several defense mechanisms:
它使用几种防御机制:
- Heuristics: Filters out potential malicious input before it reaches
the LLM.
- LLM-based detection: Uses an LLM to scrutinize incoming
prompts for possible attacks.
- VectorDB: Stores embeddings of past
attacks, helping recognize and prevent future similar attacks.
- Canary tokens: Adds these tokens to prompts to catch leakages, which
then informs the framework about the incoming prompt's embeddings
for future prevention.
More details can be found on blog here https://blog.langchain.dev/rebuff/
有关更多详细信息,请访问博客https://blog.langchain.dev/rebuff/
更多回答
我是一名优秀的程序员,十分优秀!