This repo contains a Graphical User Interface (GUI) implementation of the FluxMusic model, based on the paper *FLUX that Plays Music*, which explores a simple extension of diffusion-based rectified flow Transformers for text-to-music generation.
I created a user-friendly GUI for FluxMusic using Gradio. The interface lets users generate music from text prompts without needing to use the command line.
- **Model Selection:** Choose from the different FluxMusic models (small, base, large, giant) via a dropdown menu.
- **Text Prompt:** Enter a text prompt to guide the music generation.
- **Sliders and Inputs** (a minimal Gradio sketch of these controls appears after the setup steps below):
  - **Seed:** Set a seed for reproducibility (0 for random).
  - **CFG Scale:** Adjust the Classifier-Free Guidance scale (1-40).
  - **Steps:** Set the number of diffusion steps (10-200).
  - **Duration:** Specify the length of the generated audio in seconds (10-300).
- **File Management:**
  - **Models Folder:** Place your FluxMusic model files (`.pt`) in the `models` folder.
  - **Generations Folder:** Generated audio files are saved in the `generations` folder.
- **File Naming System:** Generated files are named using the format `[prompt]_[seed]_[model]_[counter].wav` (see the sketch below).
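As an illustration, here is a minimal sketch of how such a filename could be assembled. The `build_filename` helper, the prompt-truncation length, and the sanitization rule are assumptions for this sketch, not the repo's actual code:

```python
import re
from pathlib import Path

def build_filename(prompt: str, seed: int, model: str,
                   out_dir: Path = Path("generations")) -> Path:
    """Return an unused [prompt]_[seed]_[model]_[counter].wav path (hypothetical helper)."""
    # Make the prompt filesystem-safe and cap its length (assumed 50-char limit).
    safe_prompt = re.sub(r"[^\w\- ]", "", prompt).strip().replace(" ", "_")[:50]
    counter = 0
    # Increment the counter until an unused name is found, so reruns never overwrite files.
    while (out_dir / f"{safe_prompt}_{seed}_{model}_{counter}.wav").exists():
        counter += 1
    return out_dir / f"{safe_prompt}_{seed}_{model}_{counter}.wav"
```

For example, `build_filename("calm piano", 42, "base")` would yield `generations/calm_piano_42_base_0.wav` on a first run, then `..._1.wav` on the next.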
1. Install the required dependencies:

   ```bash
   pip install -r requirements.txt
   ```

2. Place your FluxMusic model files in the `models` folder.

3. Run the GUI:

   ```bash
   python fluxGUI.py
   ```

4. Use the interface to generate music based on your prompts and preferences.
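For reference, here is a minimal sketch of how controls like those listed above could be wired together with Gradio. The `generate` stub, component defaults, and layout are assumptions for illustration; the actual `fluxGUI.py` may differ:

```python
import gradio as gr

def generate(model, prompt, seed, cfg_scale, steps, duration):
    # Placeholder: a real implementation would load the selected FluxMusic
    # checkpoint from models/ and run the diffusion sampler here.
    return None  # gr.Audio accepts a filepath or a (sample_rate, ndarray) tuple

with gr.Blocks(title="FluxMusic GUI") as demo:
    # Ranges mirror those listed in the Features section above.
    model = gr.Dropdown(["small", "base", "large", "giant"], value="base", label="Model")
    prompt = gr.Textbox(label="Text Prompt")
    seed = gr.Number(value=0, precision=0, label="Seed (0 for random)")
    cfg_scale = gr.Slider(1, 40, value=7, label="CFG Scale")
    steps = gr.Slider(10, 200, value=50, step=1, label="Steps")
    duration = gr.Slider(10, 300, value=30, step=1, label="Duration (seconds)")
    audio = gr.Audio(label="Generated Audio")
    gr.Button("Generate").click(
        generate,
        inputs=[model, prompt, seed, cfg_scale, steps, duration],
        outputs=audio,
    )

demo.launch()
```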
FluxMusic comes in four sizes: Small, Base, Large, and Giant. You can download these models from the following links:
| Model | URL |
|---|---|
| FluxMusic-Small | link |
| FluxMusic-Base | link |
| FluxMusic-Large | link |
| FluxMusic-Giant | link |
The codebase is based on the awesome Flux and AudioLDM2 repos.