Learned Motion Matching

A neural-network-based generative model for character animation.

The system takes user controls as input and automatically produces high-quality motion that achieves the desired target. Implemented using PyTorch.

Follows the Ubisoft La Forge paper "Learned Motion Matching" (Holden et al., 2020).

How it works

---------

Currently, the project is split into two parts:

  • Unity: extracts all the character animation information and stores it in three files: XData.txt, YData.txt and HierarchyData.txt;
  • PyTorch: trains the neural network models on the data generated above.

After training, .onnx files are generated and exported to Unity, where inference on the trained networks is run using Barracuda.
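
For reference, exporting a trained model to ONNX uses the standard torch.onnx.export call. A minimal sketch (the MLP class and the feature size 24 are illustrative placeholders, not the repository's actual networks):

    import torch
    import torch.nn as nn

    # Illustrative stand-in for one of the trained networks;
    # the real architectures live in this repo's training scripts.
    class MLP(nn.Module):
        def __init__(self, n_in, n_out, hidden=512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_in, hidden), nn.ReLU(),
                nn.Linear(hidden, n_out))

        def forward(self, x):
            return self.net(x)

    model = MLP(n_in=24, n_out=24)
    model.eval()

    # Barracuda consumes standard ONNX files, so a plain export is enough.
    dummy = torch.randn(1, 24)  # a batch with one feature vector
    torch.onnx.export(model, dummy, "stepper.onnx",
                      input_names=["x"], output_names=["y"])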

XData.txt

This file consists of C blocks, where block i has F[i] lines and M columns. C is the number of clips, F[i] is the number of frames of clip i, and M is the number of features (described here, Section 3: Basic Motion Matching). Blocks are separated by an empty line.

Let's consider the following animation database:

C = 2, F[0] = 3, F[1] = 4, M = 24

XData.txt should be in this format (illustrative values):

-8.170939E-08 0 0 -1.634188E-07 0 0 -2.451281E-07 0 0 0 -3.773226E-05 0 1.117587E-10 4.470348E-11 -0.001392171 0 0 0 -6.705522E-11 3.352761E-11 -0.001392171 0 0 0
-8.579486E-08 0 0 -1.675043E-07 0 0 -2.492136E-07 0 0 0 -3.773226E-05 0 1.117587E-10 4.470348E-11 -0.001392171 0 0 0 -6.705522E-11 3.352761E-11 -0.001392171 0 0 0
-8.988033E-08 0 0 -1.715897E-07 0 0 -2.532991E-07 0 0 0 -4.085493E-05 0 1.117587E-10 4.470348E-11 -0.001392171 0 0 0 -6.705522E-11 3.352761E-11 -0.001392171 0 0 0

-8.170939E-08 0 0 -1.634188E-07 0 0 -2.451281E-07 0 0 0 -3.773226E-05 0 1.117587E-10 4.470348E-11 -0.001392171 0 0 0 -6.705522E-11 3.352761E-11 -0.001392171 0 0 0
-8.579486E-08 0 0 -1.675043E-07 0 0 -2.492136E-07 0 0 0 -3.773226E-05 0 1.117587E-10 4.470348E-11 -0.001392171 0 0 0 -6.705522E-11 3.352761E-11 -0.001392171 0 0 0
-8.988033E-08 0 0 -1.715897E-07 0 0 -2.532991E-07 0 0 0 -4.085493E-05 0 1.117587E-10 4.470348E-11 -0.001392171 0 0 0 -6.705522E-11 3.352761E-11 -0.001392171 0 0 0
-8.988033E-08 0 0 -1.715897E-07 0 0 -2.532991E-07 0 0 0 -4.085493E-05 0 1.117587E-10 4.470348E-11 -0.001392171 0 0 0 -6.705522E-11 3.352761E-11 -0.001392171 0 0 0
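
A minimal sketch of how a file in this layout could be read back into per-clip arrays (the load_clips helper is illustrative, not part of the repo):

    import numpy as np

    def load_clips(path):
        # XData.txt / YData.txt: blocks of whitespace-separated floats,
        # one clip per block, blocks separated by an empty line.
        with open(path) as f:
            blocks = f.read().strip().split("\n\n")
        # Each block becomes an (F[i], M) array.
        return [np.array([[float(v) for v in line.split()]
                          for line in block.splitlines()])
                for block in blocks]

    clips = load_clips("database/XData.txt")
    # For the example above: len(clips) == 2,
    # clips[0].shape == (3, 24) and clips[1].shape == (4, 24).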

YData.txt

Similar to XData.txt, but here M is the number of pose features.

HierarchyData.txt

This file stores the character hierarchy used to compute forward kinematics on the PyTorch side. It consists of N lines, where N is the number of joints of the character. Each line holds the index of that joint's parent; the root has no parent, so its line is 0.

Let's consider the following rig hierarchy:

       root
         |
      spine_01
        / \ 
  leg_l    leg_r

HierarchyData.txt should be:

0
0
1
1
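
With parent indices in this form, and assuming joints are ordered so that parents come before their children (as in the example), forward kinematics reduces to a single pass over the joints. A minimal, positions-only sketch (the real ForwardKinematics method also accumulates rotations; this simplification assumes identity rotations):

    import numpy as np

    def forward_kinematics(parents, local_pos):
        # parents: parent index per joint (the root entry is 0);
        # local_pos: (N, 3) joint offsets relative to the parent.
        global_pos = np.zeros_like(local_pos, dtype=float)
        for j, p in enumerate(parents):
            if j == 0:
                global_pos[j] = local_pos[j]  # the root is already global
            else:
                global_pos[j] = global_pos[p] + local_pos[j]
        return global_pos

    # Hierarchy from the example above: root, spine_01, leg_l, leg_r.
    parents = [0, 0, 1, 1]
    local = np.array([[0., 0., 0.], [0., 1., 0.],
                      [-0.2, -0.5, 0.], [0.2, -0.5, 0.]])
    print(forward_kinematics(parents, local))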

Installation

---------
  • Download the source code from the latest tag here;
  • Download the Unity sample project from the latest tag here;
  • Install the Barracuda package via Unity's Package Manager (Window -> Package Manager).

Usage

---------

Currently, to use this system, the following steps are needed:

Unity

  1. Add the desired animation clips in the character's Animator tab;
  2. Add and set up the Gameplay script on the desired character;
  3. Hit the "Extract data from animator" button, located in the Inspector of the Gameplay script;
  4. Export the previously generated "XData", "YData" and "HierarchyData" files to the PyTorch "/database" folder.

PyTorch

  1. Run decompressor.py, followed by stepper.py and projector.py (these last two can be run in parallel; see the sketch after this list);
  2. Export the ONNX files generated in the PyTorch environment to Unity's "/Assets/Motion Matching/ONNX" folder;
  3. Export the "QData.txt", "YtxyData.txt" and "ZData.txt" files generated in the PyTorch environment to Unity's "/Assets/Motion Matching/Database" folder.
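
If you want to script step 1, the two independent trainings can be launched side by side once the decompressor has finished. A small sketch using subprocess (assumes it is run from the PyTorch project root):

    import subprocess

    # The README's ordering implies the decompressor must finish first;
    # afterwards the stepper and projector can train in parallel.
    subprocess.run(["python", "decompressor.py"], check=True)
    procs = [subprocess.Popen(["python", script])
             for script in ("stepper.py", "projector.py")]
    for p in procs:
        p.wait()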

Unity

  1. Hit the "Play" button and play.

Important notes

---------

If you try to use this system with your own character and animations, there are some details to keep in mind:

  • All the character's bone scales must be (1, 1, 1) for the ForwardKinematics method to work properly;
  • Key all the bones (with Location, Quaternion and Scale info);
  • Every animation clip must have at least 60 frames;
  • The last 60 frames of every animation clip must keep the same trajectory direction, because the 60 future frames are passed as input to the neural networks.