Error from LangChain's OpenAI object, suspected incompatibility
Closed · 1 comment
tclxdean-lu commented
Running the following code:
import openai

openai.api_key = "token1"
# Point the OpenAI SDK at the local ChatGLM OpenAI-compatible API
openai.api_base = "http://127.0.0.1:8080/v1"

from langchain.llms import OpenAI

llm = OpenAI(openai_api_key="token1")
print(llm("详细介绍一下chatgpt"))
raises the following error:
C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\Scripts\python.exe C:\Users\Dean\Desktop\projects\ChatGLM-6B\test_openai.py
Could not import azure.core python package.
Retrying langchain.llms.openai.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Invalid response object from API: '{"detail":[{"loc":["body","messages"],"msg":"field required","type":"value_error.missing"}]}' (HTTP response code was 422).
Retrying langchain.llms.openai.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Invalid response object from API: '{"detail":[{"loc":["body","messages"],"msg":"field required","type":"value_error.missing"}]}' (HTTP response code was 422).
Retrying langchain.llms.openai.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Invalid response object from API: '{"detail":[{"loc":["body","messages"],"msg":"field required","type":"value_error.missing"}]}' (HTTP response code was 422).
Retrying langchain.llms.openai.completion_with_retry.<locals>._completion_with_retry in 8.0 seconds as it raised APIError: Invalid response object from API: '{"detail":[{"loc":["body","messages"],"msg":"field required","type":"value_error.missing"}]}' (HTTP response code was 422).
Retrying langchain.llms.openai.completion_with_retry.<locals>._completion_with_retry in 10.0 seconds as it raised APIError: Invalid response object from API: '{"detail":[{"loc":["body","messages"],"msg":"field required","type":"value_error.missing"}]}' (HTTP response code was 422).
Traceback (most recent call last):
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\openai\api_requestor.py", line 335, in handle_error_response
error_data = resp["error"]
KeyError: 'error'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\test_openai.py", line 13, in <module>
print(llm("详细介绍一下chatgpt"))
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\langchain\llms\base.py", line 297, in __call__
self.generate([prompt], stop=stop, callbacks=callbacks)
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\langchain\llms\base.py", line 191, in generate
raise e
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\langchain\llms\base.py", line 185, in generate
self._generate(prompts, stop=stop, run_manager=run_manager)
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\langchain\llms\openai.py", line 314, in _generate
response = completion_with_retry(self, prompt=_prompts, **params)
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\langchain\llms\openai.py", line 106, in completion_with_retry
return _completion_with_retry(**kwargs)
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\tenacity\__init__.py", line 325, in iter
raise retry_exc.reraise()
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\tenacity\__init__.py", line 158, in reraise
raise self.last_attempt.result()
File "C:\Users\Dean\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 439, in result
return self.__get_result()
File "C:\Users\Dean\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\_base.py", line 391, in __get_result
raise self._exception
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\langchain\llms\openai.py", line 104, in _completion_with_retry
return llm.client.create(**kwargs)
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\openai\api_resources\completion.py", line 25, in create
return super().create(*args, **kwargs)
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\openai\api_requestor.py", line 230, in request
resp, got_stream = self._interpret_response(result, stream)
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\openai\api_requestor.py", line 624, in _interpret_response
self._interpret_response_line(
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\openai\api_requestor.py", line 687, in _interpret_response_line
raise self.handle_error_response(
File "C:\Users\Dean\Desktop\projects\ChatGLM-6B\venv\lib\site-packages\openai\api_requestor.py", line 337, in handle_error_response
raise error.APIError(
openai.error.APIError: Invalid response object from API: '{"detail":[{"loc":["body","messages"],"msg":"field required","type":"value_error.missing"}]}' (HTTP response code was 422)
Process finished with exit code 1
ninehills commented
This happens because this project implements the Chat Completions endpoint (corresponding to the gpt-3.5-turbo model), whereas LangChain's OpenAI wrapper defaults to the text-davinci-003 model and therefore calls the legacy Completions endpoint.
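Concretely, the legacy Completions endpoint posts a "prompt" string, while the Chat Completions endpoint expects a "messages" list, which is exactly the field the 422 response reports as missing. A minimal sketch of the working request shape, assuming the legacy openai 0.x SDK shown in the traceback and the local server URL from the report (the model name is only illustrative):

import openai

openai.api_key = "token1"
openai.api_base = "http://127.0.0.1:8080/v1"  # local ChatGLM API server from the report

# What LangChain's OpenAI wrapper sends: a Completions request carrying a
# "prompt" field. The handler it reaches validates the body against a Chat
# Completions schema, so FastAPI rejects it with 422 "messages: field required".
# openai.Completion.create(model="text-davinci-003", prompt="详细介绍一下chatgpt")

# What the server accepts: a Chat Completions request carrying "messages".
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative name; the server serves ChatGLM behind it
    messages=[{"role": "user", "content": "详细介绍一下chatgpt"}],
)
print(resp["choices"][0]["message"]["content"])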
Please use langchain.llms.OpenAIChat or langchain.chat_models.ChatOpenAI instead. Example:
Python 3.11.2 (main, Feb 16 2023, 02:51:42) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from langchain.llms import OpenAIChat
>>> llm = OpenAIChat(openai_api_key="token1", openai_api_base="https://6d7c-34-143-203-118.jp.ngrok.io/v1")
/opt/homebrew/lib/python3.11/site-packages/langchain/llms/openai.py:695: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
warnings.warn(
>>> print(llm("详细介绍一下chatgpt"))
ChatGPT is an artificial intelligence chatbot program released by OpenAI in November 2022. Built on the large language model GPT-3.5, it was trained with instruction tuning and reinforcement learning from human feedback (RLHF). ChatGPT can recognize the syntax and semantics of text and hold conversations with human users.
>>>
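Per the deprecation warning in the session above, the same call can also go through the chat model wrapper. A minimal sketch, reusing the placeholder key and the reporter's local base URL from this thread:

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Non-deprecated equivalent of the OpenAIChat example above.
chat = ChatOpenAI(
    openai_api_key="token1",
    openai_api_base="http://127.0.0.1:8080/v1",
)
reply = chat([HumanMessage(content="详细介绍一下chatgpt")])
print(reply.content)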