# Diffusion repo ready for Banana
The repo is already set up to run the Stable Diffusion text-to-image model.
- Run `pip3 install -r requirements.txt` to download dependencies.
- Set your Huggingface auth token as an environment variable: `export HF_AUTH_TOKEN=your_auth_token`
- Run `python3 server.py` to start the server.
- Run `python3 test.py` in a different terminal session to test against it.
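Under the hood, `test.py` simply POSTs JSON to the local server. A minimal client along the same lines can be sketched with the standard library — the port (8000) and the `prompt` input field are assumptions here; check `server.py` and `test.py` for the exact values:

```python
import json
from urllib import request

SERVER_URL = "http://localhost:8000/"  # assumed default; check server.py

def build_payload(prompt: str) -> bytes:
    # The "prompt" key is an assumption -- it must match whatever
    # your app.py's inference() reads out of model_inputs.
    return json.dumps({"prompt": prompt}).encode("utf-8")

def call_server(prompt: str) -> dict:
    # POST the JSON payload and decode the JSON response.
    req = request.Request(
        SERVER_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (with the server running):
# call_server("a photo of an astronaut riding a horse")
```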
- Edit `app.py` to load and run your model. Make sure to test with `test.py`!
- When ready to deploy:
  - Add your HF_AUTH_TOKEN environment variable to the Dockerfile, or contact the Banana team to set it privately as a build arg.
  - Edit `download.py` (or the `Dockerfile` itself) with scripts that download your custom model weights at build time.
  - Edit `requirements.txt` with your pip packages. Don't delete the "sanic" line, as it's a Banana dependency.
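The shape of `app.py` can be sketched as an `init()`/`inference()` pair, where `init()` loads the model once at startup and `inference()` handles one request. The function names follow the template; the model id, the `diffusers` loading call, and the base64 response shape below are assumptions — adapt them to your model:

```python
import os

model = None  # populated once by init(), reused across requests

def init():
    """Load the model once at server startup."""
    global model
    # Heavy imports live inside init() so the module imports cheaply.
    import torch
    from diffusers import StableDiffusionPipeline  # assumed loader
    model = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",  # assumed model id
        use_auth_token=os.environ["HF_AUTH_TOKEN"],
    ).to("cuda")

def inference(model_inputs: dict) -> dict:
    """Run one request; must return a JSON-serializable dict."""
    prompt = model_inputs.get("prompt")
    if prompt is None:
        return {"message": "No prompt provided"}
    import base64, io
    image = model(prompt).images[0]
    # Encode the PIL image as base64 PNG so it survives the JSON response.
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")
    return {"image_base64": base64.b64encode(buffer.getvalue()).decode("utf-8")}
```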
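For the build-time download step, `download.py` only needs to pull the weights into the image's cache so cold starts skip the download. A sketch, assuming a `diffusers` Stable Diffusion checkpoint (swap in your own model id and loader):

```python
import os

def download_model():
    # Read the token first so a missing HF_AUTH_TOKEN fails loudly.
    token = os.environ["HF_AUTH_TOKEN"]
    from diffusers import StableDiffusionPipeline  # assumed loader
    # from_pretrained downloads and caches the weights inside the image;
    # the object itself is discarded -- only the cache matters here.
    StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",  # assumed model id
        use_auth_token=token,
    )

# In download.py, call download_model() at module level so that
# `RUN python3 download.py` in the Dockerfile triggers it at build time.
```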
At this point, you have a functioning HTTP server for your ML model. You can use it as-is, or package it up with our provided `Dockerfile` and deploy it to your favorite container hosting provider!
If Banana is your favorite GPU hosting provider (and we sure hope it is), read on!
Three steps:
- Create your own copy of this template repo. Either:
- Click "Fork" (creates a public repo)
- Click "Use this Template" (creates a private or public repo)
- Create your own repo and copy the template files into it
- Install the Banana Github App to your new repo.
- Log in to the Banana Dashboard and set up your account by saving your payment details and linking your Github.
From then onward, any pushes to the default repo branch (usually "main" or "master") trigger Banana to build and deploy your server, using the Dockerfile. Throughout the build we'll sprinkle in some secret sauce to make your server extra snappy 🔥
It'll then be deployed on our Serverless GPU cluster and callable with any of our server-side SDKs.
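From Python, the call looks roughly like this — the `banana_dev` package name and the `banana.run(api_key, model_key, model_inputs)` signature are assumptions based on the SDK at the time of writing, so check the current SDK docs before relying on them:

```python
def generate(api_key: str, model_key: str, prompt: str) -> dict:
    # Deferred import so this sketch doesn't require the SDK installed.
    import banana_dev as banana  # pip install banana-dev (assumed package)
    # model_inputs is passed straight through to your app.py inference().
    return banana.run(api_key, model_key, {"prompt": prompt})

# Example:
# generate("YOUR_API_KEY", "YOUR_MODEL_KEY", "a watercolor fox")
```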
You can monitor build-time and runtime logs by clicking the logs button in the model view on the [Banana Dashboard](https://app.banana.dev).