# Custom Langchain Agent with Local LLMs

The code is optimized for local LLMs for experimentation. You can try different models: Vicuna, Alpaca, gpt 4 x alpaca, gpt4-x-alpasta-30b-128g-4bit, etc. For more information, please check this link.
The code only requires oobabooga/text-generation-webui. For installation instructions, please follow this.
First, start the oobabooga server. Then you can run the LLM agent from the notebook file.
```shell
python server.py --model your_model_name --listen --api
```
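Once the server is up, the notebook's agent talks to it over HTTP. As a minimal sketch (not code from this repo), the snippet below shows one way to call a locally running text-generation-webui instance; the endpoint path (`/api/v1/generate`), port, and response shape are assumptions based on the webui's legacy API and may differ in your version, so check the webui's own documentation.

```python
import json
import urllib.request

# Assumed default endpoint of the text-generation-webui legacy API (port 5000);
# adjust the host/port/path if your server is configured differently.
API_URL = "http://127.0.0.1:5000/api/v1/generate"

def build_payload(prompt: str, max_new_tokens: int = 200, temperature: float = 0.7) -> dict:
    """Assemble the JSON request body (field names are assumptions)."""
    return {
        "prompt": prompt,
        "max_new_tokens": max_new_tokens,
        "temperature": temperature,
        # Example stop string for a ReAct-style agent loop, so the model
        # halts before hallucinating the tool's observation itself.
        "stop": ["\nObservation:"],
    }

def generate(prompt: str) -> str:
    """POST the prompt to the local server and return the generated text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        body = json.load(resp)
    return body["results"][0]["text"]

if __name__ == "__main__":
    # Requires the oobabooga server to be running with --api.
    print(generate("Hello, what model are you running?"))
```

A custom LangChain LLM wrapper can delegate to a `generate`-style function like this one, which is how the notebook's agent can use any model the webui loads.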