This is a bark starter template from Banana.dev that allows on-demand serverless GPU inference.
You can fork this repository and deploy it on Banana as is, or customize it based on your own needs.
- Fork this repository to your own GitHub account.
- Connect your GitHub account on Banana.
- Create a new model on Banana from the forked GitHub repository.
- Wait for the model to build after creating it.
- Add your S3 bucket credentials in `app.py`.
- Make an API request using one of the provided snippets in your Banana dashboard. Instead of sending the prompt exactly as it appears in the snippet, shape the inputs to what the bark model expects:
```python
inputs = {
    "prompt": "Let's try generating speech, with Bark, a text-to-speech model"
}
```
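As a hedged sketch of what the dashboard snippet does, the request body wraps those inputs alongside your keys. The endpoint and field names below (`apiKey`, `modelKey`, `modelInputs`) are assumptions based on Banana's v4-style HTTP API; always verify them against the snippet in your own dashboard:

```python
import json

# Placeholders: copy the real values from your Banana dashboard.
API_KEY = "YOUR_API_KEY"
MODEL_KEY = "YOUR_MODEL_KEY"

# bark takes the text to synthesize under the "prompt" field.
inputs = {
    "prompt": "Let's try generating speech, with Bark, a text-to-speech model"
}

# Assumed v4-style request body; the provided snippets build this for you.
payload = json.dumps({
    "apiKey": API_KEY,
    "modelKey": MODEL_KEY,
    "modelInputs": inputs,
})
print(payload)
```

Sending this payload (as the dashboard snippet does) returns the model output, which the template uploads to the S3 bucket you configured in `app.py`.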
For more info, check out the Banana.dev docs.