Mentat cmd hangs when using Azure on the latest repo version
zhh210 opened this issue · 4 comments
I was able to run Mentat with Azure OpenAI on the released version 0.1.19, but after making similar changes on the nightly-build version from the repo, the CLI hangs without outputting anything and I have to terminate it. The error message doesn't show why it is hanging:
Code Context:
Directory: /home/jupyter/workspace/forked/mentat
Diff: HEAD (last commit) | 1 files | 29 lines
Included files:
mentat
└── setup.py
Auto-token limit: Model max (default)
CodeMaps: Enabled
Prompt and included files token count: 1606 / 8192
Type 'q' or use Ctrl-C to quit at any time.
What can I do for you?
>>> what
Auto-Selected Features:
119 Function/Class names
Total token count: 6640
Streaming... use control-c to interrupt the model at any point
?
^C^CTraceback (most recent call last):
File "/opt/conda/bin/mentat", line 8, in <module>
sys.exit(run_cli())
File "/home/jupyter/workspace/forked/mentat/mentat/terminal/client.py", line 239, in run_cli
terminal_client.run()
File "/home/jupyter/workspace/forked/mentat/mentat/terminal/client.py", line 176, in run
asyncio.run(self._run())
File "/opt/conda/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/home/jupyter/workspace/forked/mentat/mentat/terminal/client.py", line 173, in _run
await self._shutdown()
File "/home/jupyter/workspace/forked/mentat/mentat/terminal/client.py", line 159, in _shutdown
await self.session.stop()
File "/home/jupyter/workspace/forked/mentat/mentat/session.py", line 138, in stop
await self._main_task
File "/home/jupyter/workspace/forked/mentat/mentat/session.py", line 122, in run_main
await self._main()
File "/home/jupyter/workspace/forked/mentat/mentat/session.py", line 97, in _main
file_edits = await conversation.get_model_response()
File "/home/jupyter/workspace/forked/mentat/mentat/conversation.py", line 179, in get_model_response
parsedLLMResponse, time_elapsed = await self._stream_model_response(
File "/home/jupyter/workspace/forked/mentat/mentat/conversation.py", line 145, in _stream_model_response
parsedLLMResponse = await parser.stream_and_parse_llm_response(response)
File "/home/jupyter/workspace/forked/mentat/mentat/parsers/parser.py", line 112, in stream_and_parse_llm_response
for content in chunk_to_lines(chunk):
File "/home/jupyter/workspace/forked/mentat/mentat/llm_api.py", line 149, in chunk_to_lines
return chunk["choices"][0]["delta"].get("content", "").splitlines(keepends=True)
IndexError: list index out of range
I have to Ctrl + C twice to quit mentat.
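For context, the IndexError at the bottom of the traceback comes from chunk_to_lines indexing chunk["choices"][0] on every streamed chunk; Azure's newer streaming API versions can send chunks whose choices list is empty (e.g. content-filter metadata), which would trigger exactly this error. A minimal defensive sketch, assuming the chunks support dict-style access as the traceback shows:

def chunk_to_lines(chunk) -> list[str]:
    # Some streamed chunks (notably from Azure's newer preview API versions)
    # can arrive with an empty "choices" list; indexing [0] on them raises
    # IndexError, so return no lines for those chunks instead.
    choices = chunk.get("choices") or []
    if not choices:
        return []
    content = choices[0].get("delta", {}).get("content") or ""
    return content.splitlines(keepends=True)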
Sorry you encountered this issue. Can you check whether you see the same behavior with OpenAI's API as well? Also, can you post your config?
Do you think you'd be able to let me see the fork you used for Azure so I can see what changes you made? That would make it a lot easier for me to find the problem.
@jakethekoenig I don't have an OpenAI credential but will try it out when I get one. The config is just the ~/.mentat/.env file with OPENAI_API_BASE and OPENAI_API_KEY set to my Azure OpenAI credentials.
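For anyone reproducing this, the ~/.mentat/.env file looks roughly like the sketch below; the values are placeholders, not the reporter's actual endpoint or key:

OPENAI_API_BASE=https://<your-resource>.openai.azure.com/
OPENAI_API_KEY=<your-azure-openai-key>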
@PCSwingle Sure, my change is here in my forked repo: zhh210@e309c2b
It turns out that if I switch openai.api_version from "2023-07-01-preview" to "2023-03-15-preview", it works. Likely Azure acting up again. Closing the ticket now.
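For reference, a minimal sketch of the module-level Azure settings such a fork would set, assuming the pre-1.0 openai Python client; the endpoint and key are placeholders, and only the api_version strings come from this thread:

import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"  # from OPENAI_API_BASE
openai.api_key = "<your-azure-openai-key>"                     # from OPENAI_API_KEY
# "2023-07-01-preview" hung for the reporter; "2023-03-15-preview" worked.
openai.api_version = "2023-03-15-preview"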