An AI human motion prediction algorithm
The main inspiration for developing this algorithm is exoskeleton transparency control, which aims to achieve synchronization and synergy between the motions of the exoskeleton robot and its human user. By predicting future motions from a previous time sequence, HuMAn can provide anticipation to a chosen control strategy.
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
What you need to install to run the software, and how to install it.
This algorithm is programmed in Python, currently using version 3.8. Installing Python through Anaconda 🐍 is recommended because:
- You gain access to Conda packages, in addition to pip packages;
- Conda is a great tool for managing virtual environments (you can create one to hold all the prerequisites for HuMAn, as shown below)!
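For example, a dedicated environment can be created and activated with (the environment name is arbitrary):

```
conda create -n human python=3.8
conda activate human
```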
Other key dependencies are listed below (version numbers are kept for reference, but newer versions may work):

- TensorFlow (version 2.4)

  ```
  pip install tensorflow
  ```

- NVIDIA CUDA Toolkit (version 11.0)
  - This is not mandatory, but highly recommended! An available NVIDIA GPU can speed up TensorFlow code to a great extent, compared to running solely on the CPU;
  - You can install it with Conda, enabling different versions of the toolkit to be installed in other virtual environments, or use the official installer from the NVIDIA website;
  - Be sure to pair TensorFlow and CUDA versions correctly (see this); a quick check is sketched after this list.

  ```
  conda install cudatoolkit
  ```

- STAR model (more about it below)
  - The authors of the STAR body model provide loaders based upon Chumpy, PyTorch and TensorFlow. I created a fork of their repository to make pointing to the model (.npz files) directory easier and more flexible. You can install it using pip:

  ```
  pip install git+https://github.com/Vtn21/STAR
  ```

- Trimesh (version 3.9.1)
  - Used solely for visualizing AMASS recordings as body meshes, thus not mandatory.

  ```
  conda install -c conda-forge trimesh
  ```
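After installing TensorFlow and the CUDA Toolkit, a minimal sanity check (nothing HuMAn-specific) confirms that the GPU is visible to TensorFlow:

```python
# Check the installed TensorFlow version and whether a GPU is visible.
import tensorflow as tf

print(tf.__version__)                          # e.g. 2.4.0
print(tf.config.list_physical_devices("GPU"))  # empty list = running on CPU only
```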
HuMAn uses the AMASS human motion database. Its data is publicly available, requiring only a simple account registration. The whole database (uncompressed) amounts to around 23 GB of NumPy npz files, corresponding to more than 45 hours of recordings. Keep it in a directory of your choice.
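Each recording is a plain NumPy archive, so it is easy to inspect one directly; a minimal sketch (the file path is just an example from the folder structure below, and the exact keys may vary slightly between sub-datasets):

```python
# Inspect a single AMASS recording (path is an example).
import numpy as np

rec = np.load("AMASS/datasets/ACCAD/Female1General_c3d/A1 - Stand_poses.npz")
print(rec.files)           # typically: 'trans', 'gender', 'mocap_framerate', 'betas', 'dmpls', 'poses'
print(rec["poses"].shape)  # (num_frames, num_pose_parameters)
```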
AMASS data can be visualized using a series of body models, such as SMPL, SMPL-H (which adds hand motions), SMPL-X (SMPL eXpressive, with facial expressions), or the more recent STAR. HuMAn uses the STAR model, as it has fewer parameters than its predecessors while exhibiting more realistic shape deformations. You can download the models from their webpages, creating an account as done for AMASS.
Please note that the body models are used here only for visualization and do not interfere with training. Thus, it is easy to incorporate the other models for this purpose.
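As an illustration of the visualization step, the sketch below displays the unposed STAR template body with Trimesh. It assumes the model .npz stores the template vertices under "v_template" and the triangle faces under "f", as in other SMPL-family model files; the actual scripts in this repository pose the mesh with AMASS data.

```python
# Minimal sketch: display the (unposed) STAR template body with Trimesh.
# Assumes SMPL-family key names ("v_template", "f") inside the model file.
import numpy as np
import trimesh

model = np.load("AMASS/models/star/neutral.npz", allow_pickle=True)
mesh = trimesh.Trimesh(vertices=model["v_template"],
                       faces=model["f"], process=False)
mesh.show()  # opens an interactive viewer
```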
Update the folder paths in the scripts as required. An example folder structure is given below:
```
.
├── ...
├── AMASS
│   ├── datasets                        # Folder for all AMASS sub-datasets
│   │   ├── ACCAD                       # A sub-dataset from AMASS
│   │   │   ├── Female1General_c3d      # Sub-folders for each subject
│   │   │   │   ├── A1 - Stand_poses.npz  # Each recording is a npz file
│   │   │   │   └── ...
│   │   │   └── ...
│   │   ├── BMLhandball                 # Another sub-dataset (same structure)
│   │   │   ├── S01_Expert              # Subject sub-folder
│   │   │   └── ...
│   │   └── ...
│   └── models                          # Folder for STAR model (and maybe others)
│       └── star                        # The downloaded model
│           ├── female.npz
│           ├── male.npz
│           └── neutral.npz
├── HuMAn                               # This repository
│   └── ...
└── ...
```
This repository is compatible with pip. Thus, the easiest way to use it is to clone it to a directory of your choice and install it as a pip package, enabling it to be imported inside your scripts:

```
git clone https://github.com/Vtn21/HuMAn
cd HuMAn
pip install -e .
```
After downloading and uncompressing the AMASS dataset, start with the preprocessing script inside the scripts folder. Tweak it as necessary to create the TFRecord files that will provide the algorithm with training and validation data.
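As a quick way to verify the preprocessing output, the sketch below counts the serialized examples in the generated files (the file pattern and location are assumptions; adapt them to your setup):

```python
# Count examples in the generated TFRecord files (paths are assumptions).
import tensorflow as tf

files = tf.io.gfile.glob("tfrecords/train_*.tfrecord")
dataset = tf.data.TFRecordDataset(files)
print(sum(1 for _ in dataset))  # number of serialized training examples
```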
After that, use the training script to train your model; it automatically saves the training results. Call it with the command-line arguments "-d" (dataset) and "-p" (procedure). The "train" procedure trains the model from scratch, while "transfer" loads the previously trained universal model and fine-tunes it on the selected dataset:

```
python train.py -d=universal -p=train
python train.py -d=bmlhandball -p=train
python train.py -d=bmlhandball -p=transfer
python train.py -d=mpihdm05 -p=train
python train.py -d=mpihdm05 -p=transfer
```
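For reference, this command-line interface can be reproduced with a few lines of argparse. This is a sketch inferred from the usage examples above (including the long option names), not the actual scripts/train.py:

```python
# Sketch of the CLI used above, inferred from the example commands.
import argparse

parser = argparse.ArgumentParser(description="Train or fine-tune the HuMAn model.")
parser.add_argument("-d", "--dataset", required=True,
                    help="Dataset: universal, bmlhandball or mpihdm05.")
parser.add_argument("-p", "--procedure", choices=["train", "transfer"],
                    required=True,
                    help="Train from scratch or fine-tune the universal model.")
args = parser.parse_args()
print(args.dataset, args.procedure)
```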
The evaluation folder contains several scripts for evaluating the trained model using a series of different metrics. It also contains the npz2mat script, which converts npz files to mat files for plotting results with MATLAB. This is just personal preference; it is completely possible to use Matplotlib or another library for that purpose.
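The conversion itself boils down to SciPy's savemat; a minimal sketch (file names are examples, and the actual npz2mat script may differ):

```python
# Convert a npz results file to a MATLAB .mat file (names are examples).
import numpy as np
from scipy.io import savemat

data = np.load("results.npz")
savemat("results.mat", {key: data[key] for key in data.files})
```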
- Fork the repo
- Check out a new branch and name it according to what you intend to do:

  ```
  git checkout -b BRANCH_NAME
  ```

- Commit your changes
  - Please provide a git message that explains what you've done;
  - Commit to the forked repository:

  ```
  git commit -m "A short and relevant message"
  ```

- Push to the branch:

  ```
  git push origin BRANCH_NAME
  ```

- Make a pull request!
Victor T. N. 🤖
Made with ❤️ by @Vtn21
- AMASS by Nima Ghorbani
- STAR by Ahmed A. A. Osman
- SPL by Emre Aksan and Manuel Kaufmann