How can I change from OpenAI to ChatOpenAI in langchain and Flask?

This is a Flask application that uses langchain to stream responses from the OpenAI server to a web page, where JavaScript displays the streamed response as it arrives.

I tried every way I could think of to modify the code below to replace the langchain OpenAI class with ChatOpenAI, without success. Below I include both implementations: the working one with OpenAI, and the failing one with ChatOpenAI.
Thanks to everyone in the community who can help me understand the problem. It would be very helpful if you could also show me how to solve it, since I have been trying for days and the error it shows gives me no clue.

Version of the code that works, although the library reports the OpenAI class as deprecated:

from flask import Flask, Response
import threading
import queue

from langchain.llms import OpenAI
from langchain.callbacks.base import BaseCallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

app = Flask(__name__)

@app.route('/')
def index():
    return Response('''<!DOCTYPE html>
<html>
<head><title>Flask Streaming Langchain Example</title></head>
<body>
<div id="output"></div>
<script>
const outputEl = document.getElementById('output');

(async function() {
    try {
        const controller = new AbortController();
        const signal = controller.signal;
        const timeout = 120000; // set the timeout to 120 seconds

        setTimeout(() => controller.abort(), timeout);

        const response = await fetch('/chain', {method: 'POST', signal});
        const reader = response.body.getReader();
        const decoder = new TextDecoder();
        let buffer = '';

        while (true) {
            const { done, value } = await reader.read();
            if (done) { break; }

            const text = decoder.decode(value, {stream: true});
            outputEl.innerHTML += text;
        }
    } catch (err) {
        console.error(err);
    }
})();

</script>
</body>
</html>''', mimetype='text/html')


class ThreadedGenerator:
    # Thread-safe iterator: tokens are pushed in from the LLM thread
    # and consumed by Flask's streaming response.
    def __init__(self):
        self.queue = queue.Queue()

    def __iter__(self):
        return self

    def __next__(self):
        item = self.queue.get()
        if item is StopIteration:
            raise item
        return item

    def send(self, data):
        self.queue.put(data)

    def close(self):
        self.queue.put(StopIteration)


class ChainStreamHandler(StreamingStdOutCallbackHandler):
    # Forwards every new token from the LLM callback to the generator.
    def __init__(self, gen):
        super().__init__()
        self.gen = gen

    def on_llm_new_token(self, token: str, **kwargs):
        self.gen.send(token)


def llm_thread(g, prompt):
    try:
        llm = OpenAI(
            model_name="gpt-4",
            verbose=True,
            streaming=True,
            callback_manager=BaseCallbackManager([ChainStreamHandler(g)]),
            temperature=0.7,
        )
        llm(prompt)
    finally:
        g.close()


def chain(prompt):
    g = ThreadedGenerator()
    threading.Thread(target=llm_thread, args=(g, prompt)).start()
    return g


@app.route('/chain', methods=['POST'])
def _chain():
    return Response(chain("Create a poem about the meaning of life \n\n"), mimetype='text/plain')


if __name__ == '__main__':
    app.run(threaded=True, debug=True)
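
The streaming works because Flask's Response accepts any iterable: ChainStreamHandler pushes each token onto the queue from the LLM thread, and ThreadedGenerator yields the tokens to the HTTP response as they arrive. A minimal standalone sketch of the same bridge pattern (a hypothetical demo using the classes above, not part of the app):

g = ThreadedGenerator()
threading.Thread(target=lambda: (g.send("a"), g.send("b"), g.close())).start()
print("".join(g))  # prints "ab", consuming items as soon as they are produced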


Version with the error (OpenAI replaced with ChatOpenAI):

from flask import Flask, Response
import threading
import queue

from langchain.chat_models import ChatOpenAI
from langchain.callbacks.base import BaseCallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

app = Flask(__name__)

@app.route('/')
def index():
    return Response('''<!DOCTYPE html>
<html>
<head><title>Flask Streaming Langchain Example</title></head>
<body>
<div id="output"></div>
<script>
const outputEl = document.getElementById('output');

(async function() {
    try {
        const controller = new AbortController();
        const signal = controller.signal;
        const timeout = 120000; // set the timeout to 120 seconds

        setTimeout(() => controller.abort(), timeout);

        const response = await fetch('/chain', {method: 'POST', signal});
        const reader = response.body.getReader();
        const decoder = new TextDecoder();
        let buffer = '';

        while (true) {
            const { done, value } = await reader.read();
            if (done) { break; }

            const text = decoder.decode(value, {stream: true});
            outputEl.innerHTML += text;
        }
    } catch (err) {
        console.error(err);
    }
})();

</script>
</body>
</html>''', mimetype='text/html')


class ThreadedGenerator:
    def __init__(self):
        self.queue = queue.Queue()

    def __iter__(self):
        return self

    def __next__(self):
        item = self.queue.get()
        if item is StopIteration:
            raise item
        return item

    def send(self, data):
        self.queue.put(data)

    def close(self):
        self.queue.put(StopIteration)


class ChainStreamHandler(StreamingStdOutCallbackHandler):
    def __init__(self, gen):
        super().__init__()
        self.gen = gen

    def on_llm_new_token(self, token: str, **kwargs):
        self.gen.send(token)

    # added for the chat model; this override triggers the error below
    def on_chat_model_start(self, token: str):
        print("started")


def llm_thread(g, prompt):
    try:
        llm = ChatOpenAI(
            model_name="gpt-4",
            verbose=True,
            streaming=True,
            callback_manager=BaseCallbackManager([ChainStreamHandler(g)]),
            temperature=0.7,
        )
        llm(prompt)
    finally:
        g.close()


def chain(prompt):
    g = ThreadedGenerator()
    threading.Thread(target=llm_thread, args=(g, prompt)).start()
    return g


@app.route('/chain', methods=['POST'])
def _chain():
    return Response(chain("parlami dei 5 modi di dire in inglese che gli italiani conoscono meno \n\n"), mimetype='text/plain')


if __name__ == '__main__':
    app.run(threaded=True, debug=True)



Error shown in the console at startup and when I open the web page:

Error in ChainStreamHandler.on_chat_model_start callback: ChainStreamHandler.on_chat_model_start() got an unexpected keyword argument 'run_id'
Exception in thread Thread-4 (llm_thread):
127.0.0.1 - - [09/Sep/2023 18:09:29] "POST /chain HTTP/1.1" 200 -
Traceback (most recent call last):
  File "C:\Users\user22\Desktop\Work\TESTPROJ\env\Lib\site-packages\langchain\callbacks\manager.py", line 300, in _handle_event
    getattr(handler, event_name)(*args, **kwargs)
  File "C:\Users\user22\Desktop\Work\TESTPROJ\env\Lib\site-packages\langchain\callbacks\base.py", line 168, in on_chat_model_start
    raise NotImplementedError(
NotImplementedError: StdOutCallbackHandler does not implement `on_chat_model_start`

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\user22\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "C:\Users\user22\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "c:\Users\user22\Desktop\Work\TESTPROJ\streamresp.py", line 90, in llm_thread
    llm(prompt)
  File "C:\Users\user22\Desktop\Work\TESTPROJ\env\Lib\site-packages\langchain\chat_models\base.py", line 552, in __call__
    generation = self.generate(
                 ^^^^^^^^^^^^^^
  File "C:\Users\user22\Desktop\Work\TESTPROJ\env\Lib\site-packages\langchain\chat_models\base.py", line 293, in generate
    run_managers = callback_manager.on_chat_model_start(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\user22\Desktop\Work\TESTPROJ\env\Lib\site-packages\langchain\callbacks\manager.py", line 1112, in on_chat_model_start
    _handle_event(
  File "C:\Users\user22\Desktop\Work\TESTPROJ\env\Lib\site-packages\langchain\callbacks\manager.py", line 304, in _handle_event
    message_strings = [get_buffer_string(m) for m in args[1]]
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\user22\Desktop\Work\TESTPROJ\env\Lib\site-packages\langchain\callbacks\manager.py", line 304, in <listcomp>
    message_strings = [get_buffer_string(m) for m in args[1]]
                       ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\user22\Desktop\Work\TESTPROJ\env\Lib\site-packages\langchain\schema\messages.py", line 52, in get_buffer_string
    raise ValueError(f"Got unsupported message type: {m}")
ValueError: Got unsupported message type: p

Thank you very much for the support!

Recommended answers:

Change your function as below.

Old


def on_chat_model_start(self, token: str):
    print("started")

New

def on_chat_model_start(self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any):
    print("started")


Thanks to the user python273 on GitHub, I've resolved it:

import os
os.environ["OPENAI_API_KEY"] = ""

from flask import Flask, Response, request
import threading
import queue

from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import AIMessage, HumanMessage, SystemMessage

app = Flask(__name__)

@app.route('/')
def index():
    # just for the example, html is included directly, move to .html file
    return Response('''
<!DOCTYPE html>
<html>
<head><title>Flask Streaming Langchain Example</title></head>
<body>
<form id="form">
    <input name="prompt" value="write a short koan story about seeing beyond"/>
    <input type="submit"/>
</form>
<div id="output"></div>
<script>
const formEl = document.getElementById('form');
const outputEl = document.getElementById('output');

let aborter = new AbortController();
async function run() {
    aborter.abort();  // cancel previous request
    outputEl.innerText = '';
    aborter = new AbortController();
    const prompt = new FormData(formEl).get('prompt');
    try {
        const response = await fetch(
            '/chain', {
                signal: aborter.signal,
                method: 'POST',
                headers: {'Content-Type': 'application/json'},
                body: JSON.stringify({
                    prompt
                }),
            }
        );
        const reader = response.body.getReader();
        const decoder = new TextDecoder();
        while (true) {
            const { done, value } = await reader.read();
            if (done) { break; }
            const decoded = decoder.decode(value, {stream: true});
            outputEl.innerText += decoded;
        }
    } catch (err) {
        console.error(err);
    }
}
run();  // run on initial prompt
formEl.addEventListener('submit', function(event) {
    event.preventDefault();
    run();
});
</script>
</body>
</html>
''', mimetype='text/html')

class ThreadedGenerator:
    def __init__(self):
        self.queue = queue.Queue()

    def __iter__(self):
        return self

    def __next__(self):
        item = self.queue.get()
        if item is StopIteration:
            raise item
        return item

    def send(self, data):
        self.queue.put(data)

    def close(self):
        self.queue.put(StopIteration)

class ChainStreamHandler(StreamingStdOutCallbackHandler):
    def __init__(self, gen):
        super().__init__()
        self.gen = gen

    def on_llm_new_token(self, token: str, **kwargs):
        self.gen.send(token)

def llm_thread(g, prompt):
    try:
        chat = ChatOpenAI(
            verbose=True,
            streaming=True,
            callbacks=[ChainStreamHandler(g)],
            temperature=0.7,
        )
        chat([HumanMessage(content=prompt)])
    finally:
        g.close()


def chain(prompt):
    g = ThreadedGenerator()
    threading.Thread(target=llm_thread, args=(g, prompt)).start()
    return g


@app.route('/chain', methods=['POST'])
def _chain():
    return Response(chain(request.json['prompt']), mimetype='text/plain')

if __name__ == '__main__':
    app.run(threaded=True, debug=True)

Link to the original reply: https://gist.github.com/python273/563177b3ad5b9f74c0f8f3299ec13850
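
The decisive change is the call itself: the chat model receives a list of messages, chat([HumanMessage(content=prompt)]), instead of a raw string. Passing a raw string is what produced the ValueError above: get_buffer_string expects message objects, so it iterated the prompt character by character and failed on the first one, the 'p' of "parlami". To test the endpoint without the browser page, a sketch using the requests library (assuming the app is running on localhost:5000):

import requests

# Stream tokens from the /chain endpoint as they arrive.
with requests.post(
    "http://127.0.0.1:5000/chain",
    json={"prompt": "write a short koan story about seeing beyond"},
    stream=True,
) as resp:
    for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="", flush=True)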

More replies:

Thanks for the reply, but I really don't know how to combine the previously posted solution with this new implementation. Honestly, even deleting on_chat_model_start wouldn't change anything for me, since the core of the problem is that it still errors with or without the on_chat_model_start function.

If you remove on_chat_model_start, what error are you getting? What is the exact issue?

Here you can see the error, which I could not paste as a comment since it is too long for Stack Overflow (this version is without the on_chat_model_start method, which is not useful for my purpose).
