MusicLM - Pytorch (wip)

Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in Pytorch.

They are basically using text-conditioned AudioLM, but surprisingly with the embeddings from a contrastively learned text-audio model named MuLan. MuLan is what will be built out in this repository, with AudioLM modified from the other repository to support the music generation needs here (see the sketch below).
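To make the MuLan idea concrete, here is a minimal sketch of contrastive training between audio and text embeddings in plain PyTorch. All class names, dimensions, and the encoder stand-ins are illustrative assumptions and not this repository's actual API; the real MuLan uses an audio spectrogram transformer and a text transformer in place of the toy encoders.

```python
# Toy MuLan-style contrastive training sketch (illustrative only, not the repo's API)
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMuLan(nn.Module):
    def __init__(self, audio_dim = 128, text_dim = 64, embed_dim = 32):
        super().__init__()
        # stand-ins for the audio spectrogram transformer and text transformer
        self.audio_encoder = nn.Linear(audio_dim, embed_dim)
        self.text_encoder = nn.Linear(text_dim, embed_dim)
        self.temperature = nn.Parameter(torch.tensor(0.07))

    def forward(self, audio_feats, text_feats):
        # project both modalities into a shared, L2-normalized embedding space
        a = F.normalize(self.audio_encoder(audio_feats), dim = -1)
        t = F.normalize(self.text_encoder(text_feats), dim = -1)

        # similarity matrix between every audio / text pair in the batch
        logits = a @ t.t() / self.temperature
        labels = torch.arange(len(a), device = a.device)

        # symmetric InfoNCE-style loss: matched pairs on the diagonal are positives
        return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

model = ToyMuLan()
audio = torch.randn(8, 128)   # e.g. pooled spectrogram features
text = torch.randn(8, 64)     # e.g. pooled token embeddings
loss = model(audio, text)
loss.backward()
```

The resulting text embeddings would then stand in for the semantic conditioning fed to AudioLM's transformer stages, which is the core difference from the unconditioned AudioLM setup.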

Citations

@article{Mittal2021SymbolicMG,
    title   = {Symbolic Music Generation with Diffusion Models},
    author  = {Gautam Mittal and Jesse Engel and Curtis Hawthorne and Ian Simon},
    journal = {ArXiv},
    year    = {2021},
    volume  = {abs/2103.16091}
}
@article{Huang2022MuLanAJ,
    title   = {MuLan: A Joint Embedding of Music Audio and Natural Language},
    author  = {Qingqing Huang and Aren Jansen and Joonseok Lee and Ravi Ganti and Judith Yue Li and Daniel P. W. Ellis},
    journal = {ArXiv},
    year    = {2022},
    volume  = {abs/2208.12415}
}