ricklamers/shell-ai

Feature Request: ConsoleOutput as context

Closed this issue · 2 comments

To automatically fix errors, or to provide a kind of chat history, I think it would be helpful to make the console output visible to the LLM, or at least part of it (1-2k tokens).

Good idea! But it's very prone to sending sensitive data to OpenAI. I'd welcome a PR that puts this behind a configuration option that's disabled by default.

Probably won't go with this for shell-ai; it would require running inside a terminal multiplexer like tmux or screen to have access to the scrollback buffer. I'll let other projects go down that route.
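For reference, a minimal sketch of what the tmux-based approach could look like. The `tmux capture-pane -p` command is real, but the helper names, the character budget, and the overall structure are illustrative assumptions, not part of shell-ai:

```python
import subprocess


def tail_context(text: str, max_chars: int = 4000) -> str:
    """Keep only the trailing max_chars characters (~1-2k tokens)."""
    return text[-max_chars:]


def capture_tmux_scrollback(lines: int = 200) -> str:
    """Read the current tmux pane's scrollback; only works inside tmux."""
    result = subprocess.run(
        ["tmux", "capture-pane", "-p", "-S", f"-{lines}"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


if __name__ == "__main__":
    try:
        context = tail_context(capture_tmux_scrollback())
    except (FileNotFoundError, subprocess.CalledProcessError):
        context = ""  # tmux not installed, or not running inside a session
    print(f"captured {len(context)} chars of console context")
```

This also illustrates the objection above: the capture step only works when the user happens to be inside tmux or screen, which is why a plain shell tool can't rely on it.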