A set of APIs for various SillyTavern extensions.
You need to run the latest version of my TavernAI fork. Grab it here: Direct link to ZIP, Git repository
All modules require at least 6 GB of VRAM to run. With Stable Diffusion disabled, they will probably fit in 4 GB. Alternatively, everything can also be run on the CPU.
Try on Colab (runs KoboldAI backend and TavernAI Extras server alongside):
Colab link: https://colab.research.google.com/github/Cohee1207/SillyTavern/blob/main/colab/GPU.ipynb
Alternative link (legacy, not endorsed): https://colab.research.google.com/github/Cohee1207/TavernAI-extras/blob/main/colab/GPU.ipynb
The default requirements.txt contains only the basic packages for text processing.
If you want to use the most advanced features (like Stable Diffusion or TTS), change that to requirements-complete.txt in the commands below. See the Modules section for more details.
You must specify a list of module names to be run in the `--enable-modules` command (`caption` is provided as an example). See the Modules section.
- Open colab link
- Select desired "extra" options and start the cell
- Wait for it to finish
- Get the API URL link from the colab output under the `### TavernAI Extensions LINK ###` title
- Start TavernAI with extensions support: set `enableExtensions` to `true` in config.conf
- Navigate to TavernAI settings, put in the API URL, and tap "Connect" to load the extensions
- Install Miniconda: https://docs.conda.io/en/latest/miniconda.html
- Install git: https://git-scm.com/downloads
- Before the first run, create an environment (let's call it `extras`):
```
conda create -n extras
conda activate extras
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 git -c pytorch -c nvidia
git clone https://github.com/Cohee1207/TavernAI-extras
cd TavernAI-extras
pip install -r requirements.txt
```
- Run `python server.py --enable-modules=caption`
- Get the API URL. Defaults to `http://localhost:5100` if you run locally.
- Start TavernAI with extensions support: set `enableExtensions` to `true` in config.conf
- Navigate to TavernAI settings, put in the API URL, and tap "Connect" to load the extensions
- To run again, simply activate the environment and run the script:
```
conda activate extras
python server.py
```
- Install Python 3.10: https://www.python.org/downloads/release/python-31010/
- Install git: https://git-scm.com/downloads
- Clone the repo:

```
git clone https://github.com/Cohee1207/TavernAI-extras
cd TavernAI-extras
```
- Run `pip install -r requirements.txt`
- Run `python server.py --enable-modules=caption`
- Get the API URL. Defaults to `http://localhost:5100` if you run locally.
- Start TavernAI with extensions support: set `enableExtensions` to `true` in config.conf
- Navigate to the TavernAI extensions menu, put in the API URL, and tap "Connect" to load the extensions
Name | Description | Included in default requirements.txt |
---|---|---|
`caption` | Image captioning | ✔️ Yes |
`summarize` | Text summarization | ✔️ Yes |
`classify` | Text sentiment classification | ✔️ Yes |
`keywords` | Text key phrase extraction | ✔️ Yes |
`prompt` | SD prompt generation from text | ✔️ Yes |
`sd` | Stable Diffusion image generation | ❌ No |
`GET /api/extensions`
- **Input:** None
- **Output:** `{"extensions":[{"metadata":{"css":"file.css","display_name":"human-friendly name","js":"file.js","requires":["module_id"]},"name":"extension_name"}]}`
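A client can parse this response to discover which modules each extension depends on. A minimal sketch in Python (the sample body below is copied from the example response above, not fetched from a live server):

```python
import json

# Sample body in the shape returned by GET /api/extensions (copied from above)
sample = ('{"extensions":[{"metadata":{"css":"file.css",'
          '"display_name":"human-friendly name","js":"file.js",'
          '"requires":["module_id"]},"name":"extension_name"}]}')

data = json.loads(sample)
# Collect every module the listed extensions depend on
required = {m for ext in data["extensions"] for m in ext["metadata"]["requires"]}
print(required)
```

The resulting set can be compared against the modules passed to `--enable-modules` to warn about missing dependencies.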
`GET /api/script/<name>`
- **Input:** Extension name in the route
- **Output:** File content
`GET /api/style/<name>`
- **Input:** Extension name in the route
- **Output:** File content
`GET /api/asset/<name>/<asset>`
- **Input:** Extension name and asset name in the route
- **Output:** File content
`POST /api/caption`
- **Input:** `{ "image": "base64 encoded image" }`
- **Output:** `{ "caption": "caption of the posted image" }`
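The request body is plain JSON carrying the image as a base64 string. A minimal sketch of building it in Python (the image bytes here are a stand-in; in practice you would read them from a file):

```python
import base64
import json

def build_caption_payload(image_bytes: bytes) -> str:
    """Build the JSON body /api/caption expects: a base64-encoded image."""
    return json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")})

# Stand-in bytes; in practice, read them from an image file
payload = build_caption_payload(b"fake image data")
```

Decoding the `"image"` field of the payload yields the original bytes, which is how the server recovers the picture.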
`POST /api/summarize`
- **Input:** `{ "text": "text to be summarized", "params": {} }`
- **Output:** `{ "summary": "summarized text" }`
Name | Default value |
---|---|
`temperature` | 1.0 |
`repetition_penalty` | 1.0 |
`max_length` | 500 |
`min_length` | 200 |
`length_penalty` | 1.5 |
`bad_words` | `["\n", '"', "*", "[", "]", "{", "}", ":", "(", ")", "<", ">"]` |
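Any subset of these defaults can be overridden via the `params` object. A small sketch of building such a request body (the overridden values are arbitrary examples):

```python
import json

# "params" may override any subset of the defaults from the table above
payload = json.dumps({
    "text": "text to be summarized",
    "params": {"max_length": 300, "min_length": 100},
})
```

Fields omitted from `params` keep their defaults.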
`POST /api/classify`
- **Input:** `{ "text": "text to classify sentiment of" }`
- **Output:**

```json
{
  "classification": [
    { "label": "joy", "score": 1.0 },
    { "label": "anger", "score": 0.7 },
    { "label": "love", "score": 0.6 },
    { "label": "sadness", "score": 0.5 },
    { "label": "fear", "score": 0.4 },
    { "label": "surprise", "score": 0.3 }
  ]
}
```
**Notes:**
- Sorted in descending score order
- The list of labels is defined by the classification model
- Scores range from 0.0 to 1.0
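Because the list is sorted by descending score, a client can read the dominant emotion from the first entry. A small sketch (the response is a truncated copy of the example above, not a live server reply):

```python
# Sample response copied from the example above (truncated)
response = {
    "classification": [
        {"label": "joy", "score": 1.0},
        {"label": "anger", "score": 0.7},
        {"label": "love", "score": 0.6},
    ]
}

# The list is sorted by descending score, so the first entry wins
top = response["classification"][0]["label"]
```

This is how an extension might pick a single sprite or expression to display for a message.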
`POST /api/keywords`
- **Input:** `{ "text": "text to be scanned for key phrases" }`
- **Output:** `{ "keywords": [ "array of", "extracted", "keywords" ] }`
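The extracted key phrases come back as a plain array. One common client-side use is joining them into a single comma-separated string, e.g. as seed text for a prompt (the sample response below is the example above, not a live reply):

```python
# Sample response in the shape returned by /api/keywords
response = {"keywords": ["array of", "extracted", "keywords"]}

# Join the phrases into one comma-separated string
joined = ", ".join(response["keywords"])
```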
`POST /api/prompt`
- **Input:** `{ "name": "character name (optional)", "text": "textual summary of a character" }`
- **Output:** `{ "prompts": [ "array of generated prompts" ] }`
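Since `"name"` is optional, a request builder should omit the key entirely when no name is given rather than sending an empty string. A small sketch (the helper name and example text are illustrative, not part of the API):

```python
import json

def build_prompt_payload(text, name=None):
    """Build the /api/prompt body; "name" is optional and omitted when absent."""
    body = {"text": text}
    if name is not None:
        body["name"] = name
    return json.dumps(body)

payload = build_prompt_payload("a cheerful adventurer with red hair")
```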
`POST /api/image`
- **Input:** `{ "prompt": "prompt to be generated" }`
- **Output:** `{ "image": "base64 encoded image" }`
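The generated picture arrives as a base64 string, so the client must decode it back to raw bytes before saving or displaying it. A minimal sketch (the response bytes and the file name are stand-ins):

```python
import base64

# Stand-in for a real response; /api/image returns the picture as base64
sample_response = {"image": base64.b64encode(b"raw png bytes").decode("ascii")}

# Decode back to raw bytes before writing the file to disk
image_bytes = base64.b64decode(sample_response["image"])
with open("generated.png", "wb") as f:
    f.write(image_bytes)
```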
Flag | Description |
---|---|
`--enable-modules` | **Required option.** Provide a list of enabled modules. Expects a comma-separated list of module names. See Modules. Example: `--enable-modules=caption,sd` |
`--port` | Specify the port the application is hosted on. Default: 5100 |
`--listen` | Host the app on the local network |
`--share` | Share the app via a Cloudflare tunnel |
`--cpu` | Run the models on the CPU instead of CUDA |
`--summarization-model` | Load a custom summarization model. Expects a HuggingFace model ID. Default: `Qiliang/bart-large-cnn-samsum-ChatGPT_v3` |
`--classification-model` | Load a custom sentiment classification model. Expects a HuggingFace model ID. Default (6 emotions): `bhadresh-savani/distilbert-base-uncased-emotion`. Another solid option (28 emotions): `joeddav/distilbert-base-uncased-go-emotions-student` |
`--captioning-model` | Load a custom captioning model. Expects a HuggingFace model ID. Default: `Salesforce/blip-image-captioning-large` |
`--keyphrase-model` | Load a custom key phrase extraction model. Expects a HuggingFace model ID. Default: `ml6team/keyphrase-extraction-distilbert-inspec` |
`--prompt-model` | Load a custom prompt generation model. Expects a HuggingFace model ID. Default: `FredZhang7/anime-anything-promptgen-v2` |
`--sd-model` | Load a custom Stable Diffusion image generation model. Expects a HuggingFace model ID. Default: `ckpt/anything-v4.5-vae-swapped`. Must have the VAE pre-baked in PyTorch format or the output will look drab! |
`--sd-cpu` | Force the Stable Diffusion generation pipeline to run on the CPU. SLOW! |