This is the code for this video on YouTube by Siraj Raval, part of the #InMyFeelingsChallenge dance competition. The challenge is to create your own AI that dances to this song and submit it via Twitter, Facebook, YouTube, Instagram, or LinkedIn (or all of them) using the #InMyFeelingsChallenge hashtag. There are 3 methods to do this:
1. Run the real-time pose detection model in your browser.
   - Hold up your phone or another screen to the webcam while a video of a human dancing plays.
   - Record your screen while the real-time pose estimate follows the human dancer.
   - In Final Cut Pro, or a video editing program of your choice, apply a color mask so that every color except the color of the pose estimate model is hidden.
   - Export the video and upload!
2. Modify the code in this repository so that instead of applying pose estimation to webcam video, the demo applies it to a video on your desktop, records the result, and saves it.
   - Use a JavaScript library like chroma.js to apply a color mask programmatically to the video, hiding every color except the pose estimate model's color.
   - Upload the final result!
3. Train an LSTM neural network on a dataset of Shiggy dance videos, similar to what carykh did for trancey dance videos.
   - Upload the result!
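The color-mask step in methods 1 and 2 can be sketched as a small pixel filter. This is a minimal illustration, not code from this repo: `applyColorMask` is a hypothetical helper that takes raw RGBA pixel data, a target color, and a tolerance, and hides every pixel whose color is not close to the pose-estimate color by zeroing its alpha channel.

```javascript
// Hypothetical helper: hide every pixel that is not close to targetColor.
// pixels is an RGBA Uint8ClampedArray (4 bytes per pixel), targetColor is
// [r, g, b], tolerance is a max Euclidean distance in RGB space.
function applyColorMask(pixels, targetColor, tolerance) {
  const [tr, tg, tb] = targetColor;
  for (let i = 0; i < pixels.length; i += 4) {
    const distance = Math.sqrt(
      (pixels[i] - tr) ** 2 +
      (pixels[i + 1] - tg) ** 2 +
      (pixels[i + 2] - tb) ** 2
    );
    if (distance > tolerance) {
      pixels[i + 3] = 0; // make non-matching pixels transparent
    }
  }
  return pixels;
}
```

In the browser you would read each video frame's pixels from a canvas with `ctx.getImageData(...)`, run the mask, and write them back with `ctx.putImageData(...)`. A library like chroma.js can replace the plain RGB distance above with a perceptually better comparison such as `chroma.distance()`.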
I'll definitely give a social media shoutout to some of the best submissions! Good luck, Wizards. Let's light up this challenge and show the world what AI can do.
Run real-time pose estimation in the browser using TensorFlow.js.
PoseNet can be used to estimate either a single pose or multiple poses: one version of the algorithm detects a single person in an image or video, and another version detects multiple people.
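The pose object PoseNet returns has the shape `{ score, keypoints: [{ part, score, position: { x, y } }] }`. As a rough sketch (the `confidentKeypoints` helper and its `minConfidence` parameter are hypothetical names, not part of the PoseNet API), you typically filter keypoints by their confidence score before drawing them:

```javascript
// Hypothetical helper: keep only keypoints whose confidence score meets
// the threshold, so low-confidence body parts are not drawn.
function confidentKeypoints(pose, minConfidence) {
  return pose.keypoints.filter((kp) => kp.score >= minConfidence);
}

// In the browser, a single pose would come from something like:
//   const net = await posenet.load();
//   const pose = await net.estimateSinglePose(videoElement);
// Here the same filtering is shown on a hand-written pose for illustration:
const pose = {
  score: 0.9,
  keypoints: [
    { part: 'nose', score: 0.98, position: { x: 301, y: 110 } },
    { part: 'leftEye', score: 0.42, position: { x: 290, y: 101 } },
  ],
};
const drawable = confidentKeypoints(pose, 0.5); // keeps only the nose
```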
This is a pure JavaScript implementation of PoseNet. Thank you, TensorFlow.js, for your flexible and intuitive APIs.
Refer to this blog post for a high-level description of PoseNet running on TensorFlow.js.
Credits for this code go to the TensorFlow team at Google.