prompt-toolkit/python-prompt-toolkit

Does anyone want an LLM-based autosuggester?

Opened this issue · 8 comments

I've written an AutoSuggest class which suggests prompt completions using a locally-installed or remotely-hosted large language model. It can be customized to produce different types of completions depending on the writing task (coding, fiction, documentation) and is (optionally) aware of the context in which the prompt is being written.

Is there any interest in my contributing this to the repo as a pull request?
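For context, here is a minimal sketch of the shape such a class takes in prompt_toolkit's `AutoSuggest` API; the `complete_with_llm` function is a hypothetical stand-in for the real model call (which in my implementation goes through langchain):

```python
from prompt_toolkit.auto_suggest import AutoSuggest, Suggestion

def complete_with_llm(text: str) -> str:
    # Hypothetical stand-in for the real LLM call (e.g. via langchain).
    # A canned reply keeps this sketch runnable without a model.
    return " install prompt_toolkit" if text.endswith("pip") else ""

class LLMAutoSuggest(AutoSuggest):
    """Suggest a continuation of the current input using an LLM."""

    def get_suggestion(self, buffer, document):
        text = document.text
        if not text.strip():
            return None  # nothing typed yet, so nothing to continue
        continuation = complete_with_llm(text)
        return Suggestion(continuation) if continuation else None
```

It plugs in the same way any other suggester does, e.g. `PromptSession(auto_suggest=LLMAutoSuggest())`.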

For reference, the class will introduce a few new package dependencies:

  • langchain
  • langchain_core
  • PyEnchant

I haven't done this yet, but I'm also planning to try an AutoCompleter, which will provide a pop-up menu of next tokens ordered by their probabilities. I'm not sure how useful this will be, though.
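The completer idea would hang off prompt_toolkit's `Completer` API roughly like this; `next_token_candidates` is a hypothetical placeholder for the model's next-token distribution:

```python
from prompt_toolkit.completion import Completer, Completion

def next_token_candidates(text: str):
    # Hypothetical stand-in for the model's next-token distribution:
    # (token, probability) pairs, highest probability first.
    return [("import", 0.42), ("install", 0.31), ("in", 0.11)]

class LLMNextTokenCompleter(Completer):
    """Pop up next-token candidates ordered by model probability."""

    def get_completions(self, document, complete_event):
        for token, prob in next_token_candidates(document.text):
            # display_meta shows the probability next to each candidate
            # in the completion menu.
            yield Completion(token, start_position=0,
                             display_meta=f"p={prob:.2f}")
```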

Cool idea! But what about litellm? It's much lighter than langchain, so putting this in contrib would be nice. But wouldn't adding it cause the pip install to become slower?

Related #1913

The code doesn't use any of langchain's fancier features, so swapping out langchain for lightllm is easily doable.

I hadn't heard of lightllm until now. It seems to have an order of magnitude fewer GitHub stars and forks than langchain, but is it up-and-coming?

Regarding pip install, I don't like adding a bunch of dependencies to a project just to support one of its lesser-used features. I could make this an optional feature in prompt_toolkit's pyproject.toml:

pip install .[aisuggest]

So it wouldn't slow the default pip install at all.
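In pyproject.toml that could look something like this (the extra's name and the unpinned dependency list are illustrative):

```toml
[project.optional-dependencies]
aisuggest = [
    "langchain",
    "langchain_core",
    "pyenchant",
]
```

Users who want the feature would then run `pip install "prompt_toolkit[aisuggest]"`, and everyone else gets the usual dependency-free install.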

I meant litellm, not lightllm.

I just opened a pull request: #1995 . I did have a look at litellm, but it is a pretty low-level API that doesn't have the many features langchain brings, such as support for tools, agents, and chat memory.

The size differences are not all that significant either: litellm with all its dependencies consumes 34 MB of disk, while langchain and all its dependencies use 54 MB.

Porting the autosuggester to litellm would not be particularly difficult, and I am happy to do so if there is demand for it.

What about plain openai sdk?

Correct me if I'm wrong, but wouldn't using the openai SDK lock people into the OpenAI API? I want people to be able to swap in Anthropic, a local Llama, Gemini, and all the other alternatives out there.

It seems that nearly every provider supports an OpenAI-compatible API these days.