This code implements the paper Learning Time-Critical Responses for Interactive Character Control. The system implements a teacher-student framework to learn time-critically responsive policies, which guarantee the time-to-completion between user inputs and their associated responses regardless of the size and composition of the motion databases. The code is written in Java and Python, based on TensorFlow.
Kyungho Lee, Sehee Min, Sunmin Lee, and Jehee Lee. 2021. Learning Time-Critical Responses for Interactive Character Control. ACM Trans. Graph. 40, 4, 147. (SIGGRAPH 2021)
Project page: http://mrl.snu.ac.kr/research/ProjectAgile/Agile.html
Paper: http://mrl.snu.ac.kr/research/ProjectAgile/AGILE_2021_SIGGRAPH_author.pdf
Youtube: https://www.youtube.com/watch?v=rQKuvxg5ZHc
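At a very high level, the student policy is trained to imitate the teacher policy while being conditioned on the remaining response time. The following is a rough conceptual sketch of that idea using NumPy placeholders; the dimensions, models, and training loop are illustrative assumptions and are not the code in this repository.

```python
# Very rough conceptual sketch (NumPy placeholders, NOT the code in this
# repository): a student policy is trained by supervised learning to imitate
# a teacher policy, with the remaining time-to-completion given as an extra
# input so the learned controller can respect the response deadline.
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim = 16, 8

# Placeholder "teacher": in the actual system this comes from the motion
# database and RL training; here it is just a fixed random nonlinear map.
W_teacher = rng.standard_normal((state_dim + 1, action_dim)) * 0.1
def teacher(state, time_left):
    return np.tanh(np.concatenate([state, [time_left]]) @ W_teacher)

# Student: a linear model fitted with per-sample gradient steps on an
# imitation (mean squared error) loss.
W_student = np.zeros((state_dim + 1, action_dim))
lr = 0.01
for _ in range(5000):
    state = rng.standard_normal(state_dim)
    time_left = rng.uniform(0.0, 1.0)            # normalized time budget
    x = np.concatenate([state, [time_left]])
    target = teacher(state, time_left)           # teacher action = supervision
    grad = np.outer(x, x @ W_student - target)   # grad of 0.5*||pred - target||^2
    W_student -= lr * grad

# Quick check of how closely the student imitates the teacher on new samples.
errors = []
for _ in range(100):
    s, t = rng.standard_normal(state_dim), rng.uniform(0.0, 1.0)
    x = np.concatenate([s, [t]])
    errors.append(np.mean((x @ W_student - teacher(s, t)) ** 2))
print("mean imitation error:", np.mean(errors))
```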
This code is implemented with Java and Python, and was developed using Eclipse on Windows. A Windows 64-bit environment is required to run the code.
- Install JDK 1.8
  - Java SE Development Kit 8 Downloads
- Install Eclipse
  - Install Eclipse IDE for Java Developers
- Install Python 3.6
  - https://www.python.org/downloads/release/python-368/
- Install PyDev in Eclipse
  - https://www.pydev.org/download.html
- Install CUDA 10.0 and cuDNN
- Install Visual C++ Redistributable for VS2012
  - Laplacian Motion Editing (PmQmJNI.dll) is implemented in C++, and the VS2012 runtime is required to run it.
  - Visual C++ Redistributable for Visual Studio 2012 Update 4
- Install JEP (Java Embedded Python)
  - This library requires part of a Visual Studio installation. The exact components are unclear; .NET Framework 3.5 and VC++ 2015.3 v14.00 (v140) are likely candidates. Installing Visual Studio 2017 or later may be helpful.
- Install TensorFlow 1.14.0
  - pip install tensorflow-gpu==1.14.0
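After installing, a quick check such as the following sketch (not part of this repository) can confirm that the expected TensorFlow version is installed and that the GPU is visible from Python:

```python
# Minimal environment check (not part of this repository):
# verifies the TensorFlow version and GPU visibility.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)          # expected: 1.14.0
print("GPU available:", tf.test.is_gpu_available())   # expected: True with CUDA 10.0 + cuDNN
```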
We recommend downloading through Git in the Eclipse environment.
- Open the Git perspective in Eclipse.
- Paste the repository URL and clone the repository ( 'https://git.ncsoft.net/scm/private_khlee/private-khlee-test.git' ).
- Select all projects in the Working Tree.
- Right click and select Import Projects, then Import existing Eclipse projects.

Or you can simply download the repository as a Zip file, extract it, and import it using File->Import->General->Existing Projects into Workspace in Eclipse.
This code uses Interactive Character Animation by Learning Multi-Objective Control for learning the student policy. Download the required third-party library files (ThirdPartyDlls.zip) and extract them into the mrl.motion.critical folder.
The entire dataset used in the paper cannot be published due to copyright issues. This repository contains only a minimal motion dataset for algorithm validation. The SNU Motion Database was used for the martial arts movements, and the CMU Motion Database was used for locomotion.
All of the instructions below assume execution from Eclipse. Executable Java files are grouped in the package mrl.motion.critical.run of the project mrl.motion.critical.
- You can open a source file directly with Ctrl+Shift+R.
- You can run the currently open source file with Ctrl+F11.
- You can configure program arguments in the Run->Run Configurations menu.
You can see the pre-trained network by running RuntimeMartialArtsControlModule.java. The pre-trained network file is located at mrl.python.neural\train\martial_arts_sp_da

Keyboard controls:
- 1, 2 : walk, run
- 3, 4, 5, 6 : martial arts actions
- q, w, e, r, t : control critical response time
- Data Annotation & Configuration
  - You can check the motion data list and annotation information by executing MAnnotationRun.java.
- Model Configuration
  - The action list, the critical response time of each action, the user input model, and the error metric are defined in MartialArtsConfig.java.
- Preprocessing
  - You can precompute the data table for pruning by executing DP_Preprocessing.java.
  - The data file will be located at mrl.motion.critical\output\dp_cache
- Training teacher policy
  - You can train the teacher policy by executing LearningTeacherPolicy.java.
  - The result will be located at mrl.motion.critical\train_rl
- Training data for student policy
  - You can generate training data for the student policy by executing StudentPolicyDataGeneration.java.
  - The result will be located at mrl.python.neural\train
- Training student policy
  - You can train the student policy by executing mrl.python.neural\train_rl.py.
  - You need to set the program arguments in the Run->Run Configurations menu.
  - Arguments format: <train name> <new/load> <learning rate>
  - Example: martial_arts_sp new 0.0001
  - A command-line launch sketch is shown after this list.
- Running student policy
  - You can see the trained student policy by running RuntimeMartialArtsControlModule.java.
  - This class will load the student policy located at mrl.python.neural\train
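For reference, the sketch below illustrates the argument order for train_rl.py when launched from a command line rather than from Eclipse. It is only an illustration: running the script standalone this way, and the interpretation of each argument, are assumptions based on the example above.

```python
# Hypothetical launcher: illustrates the train_rl.py argument order.
# The meaning of each argument is inferred from the example
# "martial_arts_sp new 0.0001" and may not match the script exactly.
import subprocess

subprocess.run(
    [
        "python", r"mrl.python.neural\train_rl.py",
        "martial_arts_sp",   # training data folder under mrl.python.neural\train
        "new",               # "new" to start from scratch, "load" to resume (assumed)
        "0.0001",            # learning rate
    ],
    check=True,
)
```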