haseeb-heaven/code-interpreter

Offline-Model via LM Studio does not work as expected / local-model confusion

toovy opened this issue · 2 comments

Hi,
the README says this project works with a local LM Studio server. The docs say to edit 'config/offline-model.config', which does not exist; the file with a similar purpose is 'config/local-model.config', so I assume that is the one meant. When running 'python interpreter.py -md 'code' -m 'local-model' -dc', it asks for a .env with keys for the remote LLMs. Creating an empty .env, or a .env with empty keys, does not change anything. This looks like a bug to me.
It would be great to be able to use this project without relying on proprietary models.
Thanks, BR Tobias
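
For context, LM Studio's local server exposes an OpenAI-compatible API (by default at http://localhost:1234/v1) and does not validate the API key, so in principle no real key should be needed for local use. A minimal sketch of talking to it directly, where the model name and dummy key are placeholders for illustration, not values from this repo:

```python
# Minimal sketch: query a local LM Studio server via its
# OpenAI-compatible endpoint. The model name and API key below are
# placeholders; LM Studio serves whatever model is loaded and does
# not check the key locally.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="lm-studio",                  # dummy key; not validated locally
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio uses the loaded model
    messages=[{"role": "user", "content": "Write a hello-world in Python."}],
)
print(response.choices[0].message.content)
```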

Okay, yes, it currently assumes you already have a .env file from previous sessions with the online models;
I will work on this bug ASAP.
Thanks
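
A plausible shape for the fix, sketched below with hypothetical names that are not the repo's actual code, is to require the .env keys only when the selected model is an online one:

```python
# Hypothetical sketch of the fix: demand API keys only for online models.
# LOCAL_MODELS and ensure_api_keys are illustrative names, not this repo's API.
import os

LOCAL_MODELS = {"local-model"}  # models served by LM Studio; no key needed

def ensure_api_keys(model_name: str) -> None:
    """Require a .env API key only for online (proprietary) models."""
    if model_name in LOCAL_MODELS:
        return  # local LM Studio server does not validate keys
    if not os.getenv("OPENAI_API_KEY"):
        raise SystemExit(
            "Missing API key: add OPENAI_API_KEY to your .env for online models."
        )
```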

Fixed the bug in the latest PR