Old or New openai version
Closed this issue · 2 comments
I ran the code in hotpotqa.ipynb as-is and hit the following error:
APIRemovedInV1 Traceback (most recent call last)
Cell In[53], line 10
8 old_time = time.time()
9 for i in idxs[:500]:
---> 10 r, info = webthink(i, to_print=True)
11 rs.append(info['em'])
12 infos.append(info)
Cell In[47], line 26
24 for i in range(1, 8):
25 n_calls += 1
---> 26 thought_action = llm(prompt + f"Thought {i}:", stop=[f"\nObservation {i}:"])
27 try:
28 thought, action = thought_action.strip().split(f"\nAction {i}: ")
Cell In[52], line 10
9 def llm(prompt, stop=["\n"]):
---> 10 response = openai.Completion.create(
11 model="text-davinci-002",
12 prompt=prompt,
13 temperature=0,
14 max_tokens=100,
15 top_p=1,
16 frequency_penalty=0.0,
17 presence_penalty=0.0,
18 stop=stop
19 )
20 return response["choices"][0]["text"]
File c:\Users\fattoh.alqershi\ReAct\ReAct\myvenv\lib\site-packages\openai\lib\_old_api.py:39, in APIRemovedInV1Proxy.__call__(self, *_args, **_kwargs)
38 def __call__(self, *_args: Any, **_kwargs: Any) -> Any:
---> 39 raise APIRemovedInV1(symbol=self._symbol)
APIRemovedInV1:
You tried to access openai.Completion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
It seems the error is version-related. When I went back to openai==0.28, it raised a different error about the client parameters: it expected (messages, and other .....), but there is no messages argument in the code.
Please support me.
Thanks.
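Until the notebook is updated, one way around the migrate-or-pin choice is to detect which openai version is installed and branch accordingly. This is a minimal, hypothetical sketch (the helper names `major_version` and `openai_is_v1` are mine, not from the notebook):

```python
# Sketch: detect the installed openai major version so a notebook can
# support both the pre-1.0 (openai.Completion) and >=1.0 (OpenAI client)
# interfaces. Assumes Python 3.8+ for importlib.metadata.
from importlib import metadata

def major_version(version_string):
    """Return the leading major-version number of a version string."""
    return int(version_string.split(".")[0])

def openai_is_v1():
    """True if the installed openai package is >= 1.0.0, False otherwise."""
    try:
        return major_version(metadata.version("openai")) >= 1
    except metadata.PackageNotFoundError:
        return False
```

With this, the notebook's `llm` helper could call `openai.Completion.create` when `openai_is_v1()` is False and the new client otherwise, instead of failing with `APIRemovedInV1`.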
Hello, I encountered the same issue as you. Perhaps you could try running `openai migrate`, as described in this link (openai/openai-python#742). Below is the code I recently got working on Google Colab:
import os
from openai import OpenAI
from google.colab import userdata

client = OpenAI(
    api_key=userdata.get('OpenAPI'),  # this is also the default, it can be omitted
)

def llm(prompt, stop=["\n"]):
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=prompt,
        temperature=0,
        max_tokens=100,
        top_p=1,
        frequency_penalty=0.0,
        presence_penalty=0.0,
        stop=stop,
    )
    return response.choices[0].text
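The "expected messages" error mentioned above comes from the chat endpoint, which takes a messages list instead of a plain prompt. If you prefer a chat model over gpt-3.5-turbo-instruct, a hedged variant of `llm` could look like this (the wrapper names `prompt_to_messages` and `llm_chat` are my own, and the model name is just an example):

```python
# Sketch: the same llm() helper, adapted to the chat completions endpoint
# of openai>=1.0.0. Assumes OPENAI_API_KEY is set in the environment.
import os

def prompt_to_messages(prompt):
    """Wrap a plain completion-style prompt into the chat messages format."""
    return [{"role": "user", "content": prompt}]

def llm_chat(prompt, stop=["\n"]):
    from openai import OpenAI  # requires openai>=1.0.0
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=prompt_to_messages(prompt),
        temperature=0,
        max_tokens=100,
        stop=stop,
    )
    return response.choices[0].message.content
```

Note the two differences from the completions version: the prompt is wrapped in a messages list, and the text comes back as `response.choices[0].message.content` rather than `response.choices[0].text`.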
Closing this for now, but feel free to open a new one.