gcui-art/suno-api

Demonstration Use Case

Omarch47 opened this issue · 4 comments

Hello, I recently came across this repo while looking for Suno API access and was very happy to find it. I wanted to use Suno to have my robots generate songs based on a prompt you speak to them. I'm sharing the process here to show some of the possibilities this has enabled. I posted a video outlining the process here: https://youtu.be/hBqjj34e9x0
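For anyone curious how the song-generation step can be wired up, here is a minimal sketch of calling a locally hosted suno-api instance. It assumes the `POST /api/generate` endpoint and JSON fields (`prompt`, `make_instrumental`, `wait_audio`) described in this repo's README, and that the server is running on `localhost:3000` — adjust for your setup.

```python
import json
import urllib.request

# Assumption: suno-api is running locally on the default port.
API_BASE = "http://localhost:3000"

def build_generate_payload(prompt, instrumental=False, wait_audio=True):
    """Build the JSON body for the /api/generate endpoint."""
    return {
        "prompt": prompt,
        "make_instrumental": instrumental,
        "wait_audio": wait_audio,  # block until audio URLs are ready
    }

def generate_song(prompt):
    """POST the spoken prompt and return the parsed response (a list of clips)."""
    req = urllib.request.Request(
        f"{API_BASE}/api/generate",
        data=json.dumps(build_generate_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # The prompt would come from the robot's speech-to-text output.
    clips = generate_song("a cheerful song about a helpful robot")
    for clip in clips:
        print(clip.get("audio_url"))
```

In the robot pipeline, the spoken prompt would be transcribed first and then passed to `generate_song`; the returned audio URL is what gets streamed to the speaker.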

Thanks for the excellent work!

Thanks for the video, your work is so freaking awesome.

As is yours, I appreciate the kind words!

Great work!! You could implement a second cycle and do audio track separation, for example via lala.ai. It works great and can also be called via an API. You could then separate the vocals from the rest, play both files back in sync, and use only the vocal part for your lip-sync animation, since it seems to be audio-reactive to the volume. What do you think about that? Cheers, and I look forward to the day you create a spooky robot-face choir ;-)
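The volume-reactive part of that idea could look something like this: once the vocal stem has been separated, compute a per-frame RMS loudness envelope and normalize it to drive the mouth openness. This is a minimal sketch with hypothetical function names (`rms_envelope`, `mouth_openness`) — the example input is a synthetic tone standing in for real vocal samples.

```python
import math

def rms_envelope(samples, frame_size):
    """Per-frame RMS loudness of a mono sample sequence."""
    env = []
    for i in range(0, len(samples), frame_size):
        frame = samples[i:i + frame_size]
        env.append(math.sqrt(sum(s * s for s in frame) / len(frame)))
    return env

def mouth_openness(envelope):
    """Normalize the envelope to 0..1 so it can scale a mouth sprite."""
    peak = max(envelope) or 1.0
    return [v / peak for v in envelope]

if __name__ == "__main__":
    # Stand-in "vocal" signal: a 440 Hz tone fading in over 1 s at 8 kHz.
    sr = 8000
    samples = [math.sin(2 * math.pi * 440 * t / sr) * (t / sr) for t in range(sr)]
    # One openness value per 50 ms animation frame.
    openness = mouth_openness(rms_envelope(samples, frame_size=400))
    print([round(v, 2) for v in openness])
```

The full track and the vocal stem would be started on the same clock, with the envelope sampled at the animation frame rate to set the sprite's mouth.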

Thanks very much! That is a very good idea. I have a separate mouth system that generates mouth sprites based on input strings, so I may also try to parse the lyrics returned by this API and have them spoken while the music is playing. Getting them to sync well will be the biggest challenge, I believe!
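As a crude starting point for the syncing problem, the returned lyrics could be spread evenly across the clip's duration to get rough timestamps for each mouth-sprite word. This is a naive sketch (the function name `lyric_timeline` is hypothetical); real alignment would need onset detection on the vocal track, but it gives the sprite system something to schedule against.

```python
def lyric_timeline(lyrics, duration_s):
    """Naively spread words evenly across the song.

    Returns (start_seconds, word) pairs for the mouth-sprite scheduler.
    """
    words = lyrics.split()
    if not words:
        return []
    step = duration_s / len(words)
    return [(round(i * step, 2), w) for i, w in enumerate(words)]

if __name__ == "__main__":
    timeline = lyric_timeline("robots sing in the moonlight", 10.0)
    for start, word in timeline:
        print(f"{start:5.2f}s  {word}")
```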