Add an install script/wizard?
Opened this issue · 3 comments
Hey I just gotta say I went from helix->vscode for the sole purpose of using copilot. But I'm now moving back to helix, which I'm stoked about, this repo is brilliant!
I just want to preface this by saying that I'm a moron, and this is intended as a rant about my experiences as a moron installing and configuring this tool. This is not a reflection of how good/bad your tool is once it's all up and running.
I found the installation/configuration process a little disjointed; I made a bunch of mistakes and it ended up taking me a couple of hours to get anything working. While this whole project seems to be super modular, which is great from a software-engineering perspective, it ends up being unnecessarily complicated to install/configure.
What are your thoughts on adding a little configuration/install tool? Something like a command-line install/configuration wizard.
I could see it looking something like this:
$ cargo install llmvm
$ llmvm --quickstart
# Welcome to the llmvm quickstart, which components would you like to install
[x] Code assistant
[ ] CLI chat
# Which code-assistant model would you like to use
[x] OpenAI: chatgpt3.5 (default)
[ ] OpenAI: chatgpt4
[ ] Llama
What is your OpenAI key: asdfljasdf;lksadhf
# Perfect, you're all set up, welcome to llmvm :)
All the other management can happen behind the scenes.
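For what it's worth, the interactive part doesn't need much machinery. Here's a minimal sketch in Rust using only the standard library — the prompts, key names, and generated config layout are made up for illustration, not llmvm's actual schema:

```rust
use std::io::{self, Write};

// Prompt the user and read one trimmed line from stdin.
fn ask(prompt: &str) -> String {
    print!("{prompt} ");
    io::stdout().flush().unwrap();
    let mut line = String::new();
    io::stdin().read_line(&mut line).unwrap();
    line.trim().to_string()
}

// Render a minimal TOML config from the collected answers.
// These key names are placeholders, not llmvm's real schema.
fn render_config(model: &str, api_key: &str) -> String {
    format!("default_model = \"{model}\"\napi_key = \"{api_key}\"\n")
}

fn main() {
    println!("Welcome to the llmvm quickstart!");
    let model = ask("Which model would you like to use?");
    let api_key = ask("What is your OpenAI key?");
    // A real wizard would write this into the config directory;
    // here we just print the result.
    println!("--- generated config ---\n{}", render_config(&model, &api_key));
}
```

A crate like `dialoguer` would give you the checkbox-style multi-select from the mockup, but plain stdin prompts keep the dependency footprint at zero.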
Once I fully work out all the other ins and outs I'll likely open a PR improving some of the docs, and might even build the above tool if I have a spare hour or so. There are still some things that aren't entirely clear to me. Like how would I choose gpt-4-turbo rather than 3.5? I tried setting that in the configs, but couldn't get the naming right and only succeeded in crashing the LSP.
Anyway, food for thought. I'm super grateful for all your hard work here! I think that this repo is highly underrated.
Hey @silvergasp , thanks for the valuable feedback. You're right, the modular structure of the project does complicate setup to some degree, which unintentionally makes it more of a "hacker's tool" rather than something UX friendly/accessible. An install wizard would be great!
A couple ideas on how we could approach this:
- Make the wizard a separate crate, something like `llmvm-setup` or `llmvm-wizard`
- Include the wizard in `llmvm-core`, which can be invoked via `llmvm-core setup` or `llmvm-core wizard`
I'm thinking the second option might be better since the user would not have to install a separate crate, but I'm curious to hear your thoughts on this.
On a sidenote, I'm also thinking it might be a good idea to have a `config` directive for each crate, so that the user can edit settings without having to edit the toml config. i.e. you could run `llmvm-outsource config openai_api_key <your api key here>` to change the OpenAI key. Very much similar to the `git config` interface. I'll probably make a separate issue for that.
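The setter side of that directive can be fairly small. A naive sketch, assuming only top-level `key = "value"` lines (a real implementation would want a proper TOML parser to handle tables, comments, and quoting):

```rust
// Set a top-level `key = "value"` entry in TOML-ish config text:
// replace the existing line if present, otherwise append one.
fn set_config_value(contents: &str, key: &str, value: &str) -> String {
    let mut out = String::new();
    let mut replaced = false;
    for line in contents.lines() {
        if line.trim_start().starts_with(&format!("{key} =")) {
            out.push_str(&format!("{key} = \"{value}\"\n"));
            replaced = true;
        } else {
            out.push_str(line);
            out.push('\n');
        }
    }
    if !replaced {
        out.push_str(&format!("{key} = \"{value}\"\n"));
    }
    out
}

fn main() {
    let before = "model = \"gpt-3.5\"\n";
    let after = set_config_value(before, "openai_api_key", "sk-example");
    print!("{after}");
}
```

Mirroring `git config`, a matching getter that prints the current value would round out the interface.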
> Like how would I choose gpt-4-turbo rather than 3.5? I tried setting that in the configs, but couldn't get the naming right and only succeeded in crashing the LSP.
I have a pre-defined preset for GPT-4 here, although I haven't tested it since I'm not a paying OpenAI customer: https://github.com/DJAndries/llmvm/blob/master/core-lib/presets/gpt-4-codegen.toml
You can try setting the `default_preset` in your codeassist config to `gpt-4-codegen`.
If it still crashes, do you see any logs in your logging directory?
You can also make a test request by invoking the core directly, just to see if gpt-4 works: llmvm-core generate --model 'outsource/openai-chat/gpt-4' --prompt 'Hello GPT!' --max-tokens 2048
> Hey @silvergasp , thanks for the valuable feedback. You're right, the modular structure of the project does complicate setup to some degree, which unintentionally makes it more of a "hacker's tool" rather than something UX friendly/accessible. An install wizard would be great!
Yeah hopefully it'll help with onboarding. It's a lot more inviting to try something that "just works" out of the box with a minimal configuration, and then delve into the details later. I'm actually midway through wiping my laptop and re-installing everything, so I'll likely go through the install process and build the wizard as I go this afternoon.
> Include the wizard in `llmvm-core`, which can be invoked via `llmvm-core setup` or `llmvm-core wizard`
Yeah I think I like this option the best.
> On a sidenote, I'm also thinking it might be a good idea to have a `config` directive for each crate, so that the user can edit settings without having to edit the toml config. i.e. you could run `llmvm-outsource config openai_api_key` to change the OpenAI key. Very much similar to the `git config` interface. I'll probably make a separate issue for that.
Yeah I think that would be a great improvement. I know that loading the API key from the env is a popular way of doing things with OpenAI, e.g. via `OPENAI_API_KEY`. I also think that this is a better approach in general, or at least having some separation between the "secrets" and the rest of the config.
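One common convention is letting the environment variable win over whatever is in the config file. A sketch of that precedence (the function names are hypothetical, not llmvm's actual API; splitting the lookup into a pure function keeps it testable):

```rust
use std::env;

// Pick the OpenAI key with env-over-config precedence.
// Both names below are illustrative, not llmvm's real API.
fn resolve_api_key(env_key: Option<String>, config_key: Option<String>) -> Option<String> {
    env_key.or(config_key)
}

fn main() {
    let from_env = env::var("OPENAI_API_KEY").ok();
    // Stand-in for a value parsed out of the TOML config.
    let from_config = Some("sk-from-config-file".to_string());
    match resolve_api_key(from_env, from_config) {
        Some(key) => println!("using key: {key}"),
        None => eprintln!("no OpenAI key found in env or config"),
    }
}
```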
Also on the topic of API keys, that was one of the little mistakes that I made when I did the last install. I wrongly assumed that the `api_key` in the configs was referencing the OpenAI API key, which I later found was incorrect when I went through the outsource docs more carefully. What are your thoughts on renaming `api_key` -> `llmvm_api_key`, just to disambiguate the naming?
> I have a pre-defined preset for GPT-4 here, although I haven't tested it since I'm not a paying OpenAI customer: https://github.com/DJAndries/llmvm/blob/master/core-lib/presets/gpt-4-codegen.toml
I'll give this a go this afternoon when I'm reinstalling, and see how it goes.