This is a demo repository showcasing the capabilities of OpenTMA.

## ❓ How to use this repository?

1. Clone this repository to your local machine.

   ```bash
   git clone https://github.com/LinghaoChan/OpenTMA-demo.git
   ```

2. Install OpenTMA and its dependencies.

   ```bash
   pip install -r requirements.txt
   ```
3. Download the pre-trained model and the motion/text/SBERT embeddings from Google Drive. Unzip the archive and place the `embeddings` folder and `textencoder.ckpt` in the root directory of this repository. Your file tree should look like this:

   ```text
   OpenTMA-demo/
   ├── amass-annotations
   ├── app.py
   ├── embeddings
   │   ├── caption.txt
   │   ├── motion_embedding.npy
   │   ├── names.txt
   │   ├── sbert_embedding.npy
   │   └── text_embedding.npy
   ├── load.py
   ├── mld
   ├── model.py
   ├── README.md
   ├── requirements.txt
   ├── TEST_embeddings
   │   ├── caption.txt
   │   ├── motion_embedding.npy
   │   ├── names.txt
   │   ├── sbert_embedding.npy
   │   └── text_embedding.npy
   ├── test_temos.py
   └── textencoder.ckpt
   ```
4. Run the demo.

   ```bash
   python app.py
   ```
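As a quick sanity check of the downloaded embeddings, you can run a nearest-neighbor retrieval by cosine similarity over the motion embeddings. The helper below is a hedged sketch, not the exact logic in `app.py`: the file names in the comments come from the tree above, while `retrieve` and the random stand-in data are illustrative only.

```python
import numpy as np

def retrieve(query_emb, motion_embs, names, top_k=3):
    """Return the top_k motion names closest to query_emb by cosine similarity."""
    q = query_emb / np.linalg.norm(query_emb)
    m = motion_embs / np.linalg.norm(motion_embs, axis=1, keepdims=True)
    scores = m @ q                      # cosine similarity against every motion
    order = np.argsort(-scores)[:top_k] # indices of the best matches, descending
    return [(names[i], float(scores[i])) for i in order]

# In the demo repo you would load the real data, e.g.:
#   motion_embs = np.load("embeddings/motion_embedding.npy")
#   names = open("embeddings/names.txt").read().splitlines()
# Here we use random stand-ins so the sketch runs anywhere.
rng = np.random.default_rng(0)
motion_embs = rng.normal(size=(100, 256))
names = [f"motion_{i:03d}" for i in range(100)]

# A query very close to motion_042 should retrieve it first.
query = motion_embs[42] + 0.01 * rng.normal(size=256)
print(retrieve(query, motion_embs, names, top_k=1)[0][0])  # prints: motion_042
```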

We have also deployed this demo on SwanHub; see the online demo there.

For project details and citation information, please refer to the HumanTOMATO project.