Pinned Repositories
banana-cli
banana-node-sdk
banana-python-sdk
demo-mistral-7b-instruct-v0.1
This is a Mistral-7B-Instruct-v0.1 model starter template from Banana.dev that allows on-demand serverless GPU inference.
fructose
potassium
An HTTP serving framework by Banana
serverless-template
serverless-template-dreambooth-inference
serverless-template-gptj
serverless-template-stable-diffusion
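The starter templates above all share one serving pattern: load the model once when the container starts, then answer JSON inference requests over HTTP. As an illustration only (this is a stdlib sketch of that pattern, not the actual Potassium API or any Banana code; `load_model` and the echo "model" are placeholders):

```python
# Illustrative sketch: a minimal HTTP inference server showing the
# init-once / handle-many pattern the serverless templates share.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


def load_model():
    # A real template would load model weights onto the GPU here, once at
    # container start, so individual requests pay no load cost.
    return lambda prompt: f"output for: {prompt}"


MODEL = load_model()  # init step runs once, before any request arrives


class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Each request reuses the already-loaded MODEL.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = {"output": MODEL(payload.get("prompt", ""))}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet


def serve_in_background(port: int = 0) -> HTTPServer:
    # port=0 lets the OS pick a free port; the server runs on a daemon thread.
    server = HTTPServer(("127.0.0.1", port), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In the real templates the handler body is where the GPU inference call goes; everything else (routing, JSON parsing, the init step) is what a framework like Potassium provides so each template only writes those two functions.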
Banana's Repositories
bananaml/serverless-template-stable-diffusion
bananaml/serverless-template
bananaml/serverless-template-gptj
bananaml/demo-sd-hf-safetensors
This is a Stable Diffusion starter template from Banana.dev that allows on-demand serverless GPU inference of a custom safetensors model from Hugging Face. Basically your own Stable Diffusion API.
bananaml/serverless-template-dreambooth-inference
bananaml/demo-automatic1111-sd-webui
bananaml/demo-whisper
This is a Whisper transcription starter template from Banana.dev that allows on-demand serverless GPU inference of the openai/whisper-base model from Hugging Face. Basically your own Whisper API.
bananaml/t5-serverless
bananaml/banana-go
The Go SDK for Banana
bananaml/banana-rust-sdk
Rust SDK for calling the Banana API.
bananaml/clip-serverless-template
bananaml/demo-tinystories
This is a Tinystories starter template from Banana.dev that allows on-demand serverless GPU inference of the roneneldan/TinyStories-33M model from Hugging Face. An API for telling bedtime stories.
bananaml/demo-vicuna-33b
This is a Vicuna-33B model starter template from Banana.dev that allows on-demand serverless GPU inference.
bananaml/serverless-template-stable-diffusion-v2
bananaml/serverless-template-gpt
bananaml/auto-tensorRT
bananaml/berry-fast
bananaml/demo-bert
This is a BERT large language model starter template from Banana.dev that allows on-demand serverless GPU inference.
bananaml/demo-clip
bananaml/demo-dolphin-llama2-7b-gptq
This is a Dolphin-Llama2-7B-GPTQ model starter template from Banana.dev that allows on-demand serverless GPU inference.
bananaml/demo-firefly-llama2-13b-v1.2
This is a Firefly-Llama2-13B-v1.2-GPTQ model starter template from Banana.dev that allows on-demand serverless GPU inference.
bananaml/demo-mythomix-l2-13b-gptq
This is a MythoMix-L2-13B-GPTQ model starter template from Banana.dev that allows on-demand serverless GPU inference.
bananaml/demo-vicuna-13b
This is a Vicuna-13B model starter template from Banana.dev that allows on-demand serverless GPU inference.
bananaml/demo-vicuna-7b
This is a Vicuna-7B model starter template from Banana.dev that allows on-demand serverless GPU inference.
bananaml/demo-wizardlm-1.0-uncensored-llama2-13b-gptq
This is a WizardLM-1.0-Uncensored-Llama2-13B-GPTQ model starter template from Banana.dev that allows on-demand serverless GPU inference.
bananaml/gptj-serverless-template
bananaml/workshop_demo_storygen