/demo-bark

Banana.dev bark starter template

This is a bark starter template from Banana.dev that allows on-demand serverless GPU inference.

You can fork this repository and deploy it on Banana as is, or customize it based on your own needs.

Running this app

Deploying on Banana.dev

  1. Fork this repository to your own GitHub account.
  2. Connect your GitHub account on Banana.
  3. Create a new model on Banana from the forked GitHub repository.

Running after deploying

  1. Wait for the model to build after creating it.
  2. Add your S3 bucket credentials in app.py.
  3. Make an API request using one of the provided snippets in your Banana dashboard. Instead of the example prompt shown in the snippet, send a prompt suited to the bark text-to-speech model:
inputs = {
    "prompt": "Let's try generating speech, with Bark, a text-to-speech model"
}
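The steps above can be sketched as a small client script. This is a minimal sketch using the `banana-dev` Python SDK (`pip install banana-dev`); the API key, model key, and the `run_bark` helper name are placeholders, not part of this template:

```python
# Sketch of calling the deployed bark model from a client.
# YOUR_API_KEY / YOUR_MODEL_KEY are placeholders from your Banana dashboard.

def build_inputs(prompt: str) -> dict:
    # bark expects the text to synthesize in a "prompt" field
    return {"prompt": prompt}

def run_bark(api_key: str, model_key: str, prompt: str) -> dict:
    # deferred import so the rest of the sketch runs without the SDK installed
    import banana_dev as banana
    return banana.run(api_key, model_key, build_inputs(prompt))

if __name__ == "__main__":
    inputs = build_inputs(
        "Let's try generating speech, with Bark, a text-to-speech model"
    )
    print(inputs)
```

The response payload depends on how app.py packages its output (for this template, a reference to the audio uploaded to S3).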
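For step 2, the S3 side of app.py might look like the following sketch using `boto3` (`pip install boto3`). The bucket name, key scheme, and helper names here are assumptions for illustration, not the template's actual code; credentials come from your AWS configuration:

```python
# Hypothetical sketch of uploading generated audio to S3 from app.py.

def audio_object_key(request_id: str) -> str:
    # deterministic S3 key for a generated clip (naming scheme is an assumption)
    return f"bark-outputs/{request_id}.wav"

def upload_audio(wav_bytes: bytes, bucket: str, key: str) -> str:
    # deferred import; requires `pip install boto3` and configured AWS credentials
    import boto3
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=wav_bytes, ContentType="audio/wav")
    return f"s3://{bucket}/{key}"
```

Returning the `s3://` URI (or a presigned URL) in the API response lets the caller fetch the audio without the inference server holding it in memory.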

For more info, check out the Banana.dev docs.