
MR Motion Instructor project, ETH Mixed Reality Lab 2020


MR Motion Instructor

Report | Presentation | Video

Authors: Jan Wiegner, Rudolf Varga, Felix Pfreundtner, Daniil Emtsev and Utkarsh Bajpai

Project for the course Mixed Reality Lab 2020 at ETH.

Check out the report and video linked above for more detailed information.

Abstract

Learning complex movements can be a time-consuming process, but it is necessary for the mastery of activities like Karate Kata, Yoga and dance choreographies. It is important to have a teacher to demonstrate the correct pose sequences step by step and correct errors in the student’s body postures. In-person sessions can be impractical due to epidemics or travel distance, while videos make it hard to see the 3D postures of the teacher and of the students. As an alternative, we propose the teaching of poses in Augmented Reality (AR) with a virtual teacher and 3D avatars. The focus of our project was on dancing, but it can easily be adapted for other activities.

Architecture

Our main method of pose estimation uses the Azure Kinect with the Sensor SDK and Body Tracking SDK, connected to Unity through the official C# wrapper. We rely on Microsoft's Holographic Remoting Player to display content on the HoloLens 2, while the application itself runs and renders on a separate computer.
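
As a rough sketch of the Kinect side of this pipeline, the snippet below opens the device, creates a body tracker and reads one skeleton per frame with the official C# wrapper (Microsoft.Azure.Kinect.Sensor and Microsoft.Azure.Kinect.BodyTracking). It is an illustration only, with error handling, threading and the Unity integration omitted, and is not the project's exact code.

```csharp
using Microsoft.Azure.Kinect.Sensor;
using Microsoft.Azure.Kinect.BodyTracking;

class KinectPoseSketch
{
    static void Main()
    {
        using (Device device = Device.Open())
        {
            // Depth is required for body tracking; the color stream can feed a live RGB view.
            device.StartCameras(new DeviceConfiguration
            {
                DepthMode = DepthMode.NFOV_Unbinned,
                ColorResolution = ColorResolution.R720p,
                CameraFPS = FPS.FPS30
            });

            using (Tracker tracker = Tracker.Create(device.GetCalibration(),
                new TrackerConfiguration { SensorOrientation = SensorOrientation.Default }))
            {
                while (true)
                {
                    using (Capture capture = device.GetCapture())
                    {
                        tracker.EnqueueCapture(capture);
                    }

                    using (Frame frame = tracker.PopResult())
                    {
                        if (frame.NumberOfBodies > 0)
                        {
                            Skeleton skeleton = frame.GetBodySkeleton(0);
                            Joint pelvis = skeleton.GetJoint(JointId.Pelvis);
                            // pelvis.Position (millimeters) and pelvis.Quaternion are what
                            // eventually drive the avatars in Unity.
                        }
                    }
                }
            }
        }
    }
}
```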

Learning Motion in MR

The user has the option of following a guided course, which consists of repeating basic steps to perfect them and testing their skills on choreographies. They can also use the freeplay mode to try to beat their previous highest score.

There is a multitude of visualization options, so the user can adapt the environment to their own needs and accelerate the learning process. These include creating multiple instances of their own avatar and of the teacher's, mirroring them, showing a graph of the score, showing a live RGB feed from the Kinect, and more. Changes can be made either in the main menu or, for smaller adjustments, through the hand menu.

Our scoring mechanism is explained in our report.
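
The exact metric is described there; purely to illustrate the general idea of comparing a student pose against a teacher pose per body part, the sketch below computes the cosine similarity of one pair of corresponding bone directions. This is an assumption-laden example, not necessarily the scoring used in the project.

```csharp
using System.Numerics;

static class PoseSimilaritySketch
{
    // Illustration only: cosine similarity between one corresponding "bone"
    // (vector from a parent joint to a child joint) of student and teacher.
    // Returns 1 for identical directions and -1 for opposite directions.
    public static float BoneSimilarity(
        Vector3 studentParent, Vector3 studentChild,
        Vector3 teacherParent, Vector3 teacherChild)
    {
        Vector3 a = Vector3.Normalize(studentChild - studentParent);
        Vector3 b = Vector3.Normalize(teacherChild - teacherParent);
        return Vector3.Dot(a, b);
    }
}
```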

Avatars

There are 4 avatar options to choose from in our project:

  • Cube avatar (position and orientation of all estimated joints; see the sketch after this list)
  • Stick figure avatar (body parts used for score calculation, changing color depending on correctness)
  • Robot avatar (rigged model)
  • SMPL avatar models (parametrized rigged model)
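
For the cube avatar, the Unity sketch below shows the kind of update it implies: one cube per tracked joint, placed from that joint's position and orientation. The class and field names are made up for illustration and the Kinect-to-Unity coordinate conversion is omitted; this is not the project's avatar code.

```csharp
using UnityEngine;

// Illustrative only: one cube per tracked joint, driven by per-joint
// position and orientation as in the cube avatar option above.
public class CubeAvatarSketch : MonoBehaviour
{
    public Transform[] jointCubes; // one small cube per joint, assigned in the Inspector

    // Positions are assumed to already be converted to Unity world space (meters).
    public void UpdateAvatar(Vector3[] jointPositions, Quaternion[] jointRotations)
    {
        for (int i = 0; i < jointCubes.Length; i++)
        {
            jointCubes[i].SetPositionAndRotation(jointPositions[i], jointRotations[i]);
        }
    }
}
```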

Environment

As of January 24, 2021, this repository has been tested under the following environment:

A dedicated CUDA-compatible graphics card is necessary, NVIDIA GeForce GTX 1070 or better. For more information, consult the official Body Tracking SDK hardware requirements. We used a GTX 1070 for development and testing.

Get Started

  1. Clone this repository.
  2. Open the unity folder as a Unity project, with Universal Windows Platform as the build platform. It might take a while to fetch all packages.
  3. Set up the Azure Kinect libraries (same as for the Sample Unity Body Tracking Application):
    1. Get the NuGet packages of libraries:
      • Open the Visual Studio solution (.sln) associated with this project. You can create one by opening a C# file in the Unity Editor.
      • In Visual Studio open: Tools -> NuGet Package Manager -> Package Manager Console
      • Execute in the console: Install-Package Microsoft.Azure.Kinect.BodyTracking -Version 1.0.1
    2. Move libraries to correct folders:
      • Execute the file unity/MoveLibraryFile.bat. You should now have library files in unity/ and in the newly created unity/Assets/Plugins.
  4. Open Assets/PoseteacherScene in the Unity Editor.
  5. When prompted to import TextMesh Pro, select Import TMP Essentials. You will need to reopen the scene to fix visual glitches.
  6. (Optional) Connect to the HoloLens with Holographic Remoting using the Windows XR Plugin Remoting in Unity. Otherwise the scene will only play in the editor.
  7. (Optional) In the Main object in PoseteacherScene, set Self Pose Input Source to KINECT. Otherwise the input of the user is simulated from a file (an illustrative sketch of the input sources follows this list).
  8. Click play inside the Unity editor.
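
To illustrate what the Self Pose Input Source setting in step 7 switches between, a hypothetical enum is sketched below. Only KINECT and WEBSOCKET appear by name in this README (the latter in the experimental section at the end); the actual definition lives in the project's scripts and its names may differ.

```csharp
// Hypothetical sketch, not the project's actual definition.
public enum SelfPoseInputSource
{
    KINECT,     // live Azure Kinect body tracking
    FILE,       // input simulated from a recorded file (the default described in step 7)
    WEBSOCKET   // experimental pose stream, see "Alternative pose estimation" below
}
```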

Notes

  • Most of the application logic is inside the PoseteacherMain.cs script, which is attached to the Main game object.
  • If updating from an old version of the project, be sure to delete the Library folder generated by Unity, so that packages are handled correctly.
  • The project uses MRTK 2.5.1 from the Unity Package Manager, not imported into the Assets folder:
    • MRTK assets have to be searched inside Packages in the editor.
    • The only MRTK files in Assets should be in folders MixedRealityToolkit.Generated and MRTK/Shaders.
    • The only exception is the Samples/Mixed Reality Toolkit Examples folder, if the MRTK examples are imported.
    • If there are other MRTK folders, they are from an old version of the project (or were imported manually) and should be removed, as when updating. Remember to delete the Library folder after doing this.
  • We use the newer XR SDK pipeline instead of the Legacy XR pipeline (which is deprecated).

How to use

Use the UI to navigate in the application. This can also be done in the editor; consult the MRTK In-Editor Input Simulation page to see how.

For debugging we added the following keyboard shortcuts (a sketch of how such toggles can be wired in Unity follows the list):

  • H for toggling Hand menu in training/choreography/recording (for use in editor testing)
  • O for toggling pause of teacher avatar updates
  • P for toggling pause of self avatar updates
  • U for toggling force similarity update (even if teacher updates are paused)
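
As an illustration of how such debug toggles are commonly wired in Unity, the sketch below polls the keyboard in Update(); the actual handling lives in the project's scripts and may differ.

```csharp
using UnityEngine;

// Sketch only: typical Unity pattern for keyboard debug toggles.
public class DebugHotkeysSketch : MonoBehaviour
{
    public bool handMenuVisible;
    public bool teacherPaused;
    public bool selfPaused;
    public bool forceSimilarityUpdate;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.H)) handMenuVisible = !handMenuVisible;             // hand menu
        if (Input.GetKeyDown(KeyCode.O)) teacherPaused = !teacherPaused;                 // pause teacher avatar updates
        if (Input.GetKeyDown(KeyCode.P)) selfPaused = !selfPaused;                       // pause self avatar updates
        if (Input.GetKeyDown(KeyCode.U)) forceSimilarityUpdate = !forceSimilarityUpdate; // force similarity update
    }
}
```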

License

All our code and modifications are licensed under the attached MIT License.

We use some code and assets from:

Alternative pose estimation (experimental)

We show an example of obtaining the pose over WebSockets, combined with the Lightweight human pose estimation repository. If you do not have an Azure Kinect or a GPU you can use this, but it will be very slow.

Clone the repository and copy alt_pose_estimation/demo_ws.py into it. Install the required packages according to the repository and run demo_ws.py. Beware that PyTorch still has issues with Python 3.8, so we recommend using Python 3.7. It should now be sending pose data over a local WebSocket, which can be used if the SelfPoseInputSource value is set to WEBSOCKET for the Main object in the Unity Editor. Depending on the version of the project, some changes might need to be made in PoseInputGetter.cs to correctly set up the WebSocket.
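
As a rough sketch of the receiving side, the standalone snippet below connects to a local WebSocket and prints each incoming pose message. The URL and port are assumptions and must match whatever demo_ws.py serves; in the project itself this role is played by PoseInputGetter.cs inside Unity.

```csharp
using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

// Sketch only: minimal WebSocket client for the pose stream.
class PoseWebSocketSketch
{
    static async Task Main()
    {
        using (var ws = new ClientWebSocket())
        {
            // Assumed endpoint; adjust to match demo_ws.py.
            await ws.ConnectAsync(new Uri("ws://localhost:8080"), CancellationToken.None);

            var buffer = new byte[1 << 16];
            while (ws.State == WebSocketState.Open)
            {
                WebSocketReceiveResult result =
                    await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
                if (result.MessageType == WebSocketMessageType.Close)
                    break;

                // Each message is expected to hold the serialized joints of one frame.
                string message = Encoding.UTF8.GetString(buffer, 0, result.Count);
                Console.WriteLine(message);
            }
        }
    }
}
```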