This is a demo repository that demonstrates the capabilities of OpenTMA.
## ❓ How to use this repository?
- Clone this repository to your local machine.

```bash
git clone https://github.com/LinghaoChan/OpenTMA-demo.git
```
- Install the OpenTMA dependencies.

```bash
pip install -r requirements.txt
```
- Download the pre-trained model and the motion/text/SBERT embeddings from Google Drive. Unzip the file and put the `embeddings` folder and the `textencoder.ckpt` checkpoint in the root directory of this repository. Your file tree should look like this:
```
OpenTMA-demo/
├── amass-annotations
├── app.py
├── embeddings
│   ├── caption.txt
│   ├── motion_embedding.npy
│   ├── names.txt
│   ├── sbert_embedding.npy
│   └── text_embedding.npy
├── load.py
├── mld
├── model.py
├── README.md
├── requirements.txt
├── TEST_embeddings
│   ├── caption.txt
│   ├── motion_embedding.npy
│   ├── names.txt
│   ├── sbert_embedding.npy
│   └── text_embedding.npy
├── test_temos.py
└── textencoder.ckpt
```
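For reference, each row of the `.npy` embedding matrices is expected to line up with the same line of `caption.txt` (and `names.txt`). The sketch below illustrates that pairing; the `(5, 256)` shape is an assumption for illustration only, and synthetic stand-in files are written to a temporary directory so the snippet runs without the download.

```python
import os
import tempfile

import numpy as np

# Synthetic stand-ins for the downloaded files; the (5, 256) shape is an
# assumption for illustration, not the real embedding dimension.
tmp = tempfile.mkdtemp()
np.save(os.path.join(tmp, "motion_embedding.npy"), np.random.rand(5, 256))
with open(os.path.join(tmp, "caption.txt"), "w") as f:
    f.write("\n".join(f"caption {i}" for i in range(5)))

# Loading mirrors what the demo would do with the real embeddings/ folder.
motion_emb = np.load(os.path.join(tmp, "motion_embedding.npy"))
with open(os.path.join(tmp, "caption.txt")) as f:
    captions = f.read().splitlines()

# Row i of motion_emb corresponds to captions[i].
assert motion_emb.shape[0] == len(captions)
```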
- Run the demo.

```bash
python app.py
```
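Under the hood, a text-to-motion retrieval demo like this typically ranks the stored motions by cosine similarity between the query's text embedding and the precomputed motion embeddings. Below is a minimal sketch of that idea with random vectors standing in for the real `embeddings/*.npy` data; the function name and shapes are hypothetical, not the demo's actual API.

```python
import numpy as np

# Random stand-ins for the precomputed motion embeddings; the 256-dim
# shape is an assumption for illustration.
rng = np.random.default_rng(0)
motion_emb = rng.normal(size=(100, 256))

# A query embedding constructed to be close to motion 42.
query = motion_emb[42] + 0.01 * rng.normal(size=256)

def cosine_rank(query, bank):
    """Return motion indices sorted by cosine similarity to the query."""
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = b @ q                 # cosine similarity per motion
    return np.argsort(-sims)     # best match first

ranking = cosine_rank(query, motion_emb)
print(ranking[0])  # → 42: the nearest motion is retrieved first
```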
We also deploy our demo on SwanHub; see the online demo there.
For project details and citation information, please refer to the HumanTOMATO project.