This is a simple project that combines Rasa (Open Source), an LLM API, and a browser interface. The aim is to build a tool that helps users write a story together with an AI, with a chatbot guiding them along the way.
It uses:
- Rasa Open Source as the chatbot framework.
- LLMs via the Hugging Face Inference API.
Setting up:
- Clone the repo locally.
- Create a new python environment.
- Install Rasa:
pip install rasa
Getting Hugging Face API credentials:
- Get a text-completion LLM:
  - Find any text-completion model on Hugging Face that provides a Serverless Inference API, e.g., mistralai/Mistral-7B-v0.1.
  - In the Deploy option of the model page, Inference API (Serverless) will be listed. Click on it and copy the API_URL.
- Get a Hugging Face access token:
  - Settings >> Access Tokens >> New Token (Read access is enough)
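With the API_URL and token in hand, the endpoint can be exercised directly to verify the credentials work. A minimal Python sketch (the model URL and token below are placeholders, not values from this repo):

```python
import json
import urllib.request

# Placeholders -- substitute the API_URL you copied and your own access token.
API_URL = "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-v0.1"
API_KEY = "hf_your_token_here"

def build_request(prompt: str) -> urllib.request.Request:
    # The serverless Inference API accepts a JSON body with an "inputs" field
    # and a bearer token in the Authorization header.
    payload = json.dumps({"inputs": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

# With a valid token and network access:
# with urllib.request.urlopen(build_request("Once upon a time")) as resp:
#     print(json.load(resp))
```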
Training the Rasa chatbot:
- Activate the python environment.
- Train the model:
rasa train --domain domain
Run the project:
- Activate the python environment.
- Set the Hugging Face credentials as environment variables (Windows cmd shown; use export on Linux/macOS):
set LLM_API_KEY=your_api_key
set LLM_API_URL=api_url
- Run the Rasa core server:
make -f Makefile run-server-debug
- In another terminal, run the Rasa action server:
rasa run actions --debug
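The action server is where the LLM credentials set above get used. A hypothetical helper (not code from this repo) that reads them and fails loudly if a variable is missing:

```python
import os

def load_llm_credentials():
    # Read the credentials set in the "Run the project" step; failing here
    # is easier to debug than a failed HTTP request inside a custom action.
    try:
        return os.environ["LLM_API_KEY"], os.environ["LLM_API_URL"]
    except KeyError as missing:
        raise RuntimeError(f"Environment variable {missing} is not set") from None
```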
- Open the index.html file inside the front_end folder in a browser.
- Say "hi" in the message box.
- Or write a line directly on the center blank page and hit "Continue Story..."
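Under the hood, the front end talks to Rasa over HTTP; a sketch of the equivalent request in Python, assuming the server started by run-server-debug listens on Rasa's default port 5005 with the standard REST channel enabled:

```python
import json
import urllib.request

# Rasa's REST channel endpoint on the default port.
RASA_URL = "http://localhost:5005/webhooks/rest/webhook"

def build_message(sender: str, text: str) -> urllib.request.Request:
    # One chat turn: the REST channel expects {"sender": ..., "message": ...}
    # and replies with a JSON list of bot responses.
    payload = json.dumps({"sender": sender, "message": text}).encode("utf-8")
    return urllib.request.Request(
        RASA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# With both servers running:
# with urllib.request.urlopen(build_message("user1", "hi")) as resp:
#     for event in json.load(resp):
#         print(event.get("text", ""))
```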