
python - Is there a way to handle both context and general questions in Langchain QA retrieval?

Reposted. Author: 行者123. Updated: 2023-12-02 22:46:50

I want to build a chatbot that answers questions from a context, in my case a vector database. It does that perfectly. But I also want it to answer questions that are not in the vector database, and it cannot do so: it only answers from the context.

Here is my prompt template for this:

template = """Answer the question in your own words from the 
context given to you.
If questions are asked where there is no relevant context available, please answer from
what you know.

Context: {context}
Chat history: {chat_history}

Human: {question}
Assistant:"""

My prompt looks like this:

prompt = PromptTemplate(
    input_variables=["context", "chat_history", "question"],
    template=template,
)
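For reference, `PromptTemplate.format` is essentially named `str.format` substitution over the declared input variables, so you can preview the final text sent to the LLM. A stdlib-only sketch (the context and history strings below are made up for illustration):

```python
# Stdlib-only sketch: preview what the rendered prompt looks like once
# context, chat history, and question are substituted in.
template = """Answer the question in your own words from the
context given to you.
If questions are asked where there is no relevant context available, please answer from
what you know.

Context: {context}
Chat history: {chat_history}

Human: {question}
Assistant:"""

rendered = template.format(
    context="(no relevant documents retrieved)",
    chat_history="Human: Who is the founder of India?\nAI: Gandhi",
    question="What did I ask about India?",
)
print(rendered)
```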

To seed the memory, I saved an initial exchange:

memory.save_context({"input": "Who is the founder of India?"},
                    {"output": "Gandhi"})
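Conceptually, `ConversationBufferMemory` keeps a running transcript of input/output turns and exposes it under the `chat_history` key. A stdlib-only sketch of that behavior (the class name is mine, not LangChain's, and the real class takes extra configuration):

```python
# Stdlib-only sketch of what ConversationBufferMemory stores: a running
# transcript of (human, ai) turns, later injected as {chat_history}.
class BufferMemorySketch:
    def __init__(self):
        self.turns = []

    def save_context(self, inputs, outputs):
        # Record one conversational turn.
        self.turns.append((inputs["input"], outputs["output"]))

    def load_memory_variables(self, inputs=None):
        # Render all turns as a single transcript string.
        history = "\n".join(f"Human: {q}\nAI: {a}" for q, a in self.turns)
        return {"chat_history": history}


memory = BufferMemorySketch()
memory.save_context({"input": "Who is the founder of India?"},
                    {"output": "Gandhi"})
print(memory.load_memory_variables()["chat_history"])
```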

For QA retrieval I use the following code:

qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
    chain_type_kwargs={'prompt': prompt},
)

But when I ask a question:

question = "What did I ask about India?"
result = qa({"query": question})

I get no answer to this, even though the question is saved in the chat history. The chain only answers questions from the vector database. I would greatly appreciate any help.

Best Answer

Below is code that stores the history by default and, if there is no answer in the document store, falls back to the LLM's own knowledge.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain,RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.memory import ConversationBufferMemory
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.prompts import PromptTemplate

loader = TextLoader("fatherofnation.txt")
documents = loader.load()

template = """Answer the question in your own words from the
context given to you.
If questions are asked where there is no relevant context available, please answer from
what you know.

Context: {context}

Human: {question}
Assistant:"""

prompt = PromptTemplate(
    input_variables=["context", "question"], template=template)

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)

embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")

vectorstore = Chroma.from_documents(documents, embedding_function)

llm = "your llm model here"  # replace with an actual LLM instance, e.g. OpenAI()

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

memory.save_context({"input": "Who is the founder of India?"},
                    {"output": "Gandhi"})

qa = RetrievalQA.from_chain_type(
    llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
    chain_type_kwargs={'prompt': prompt},
)

# question = "Who is the father of India nation?"
# result = qa({"query": question})
# print(result)

question1 = "What did I ask about India?"
result1 = qa({"query": question1})
print(result1)

question2 = "Tell me about google in short ?"
result2 = qa({"query": question2})
print(result2)
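As an aside, the `CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)` step above roughly amounts to cutting each document into fixed-size windows before embedding. A stdlib-only sketch of that idea (the real splitter also splits on separators, so chunk boundaries differ):

```python
# Stdlib-only sketch of fixed-size chunking with optional overlap,
# approximating CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).
def split_text(text, chunk_size=1000, chunk_overlap=0):
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]


doc = "x" * 2500
chunks = split_text(doc)
print([len(c) for c in chunks])  # [1000, 1000, 500]
```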

Regarding "python - Is there a way to handle both context and general questions in Langchain QA retrieval?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/77020475/
