
[ECCV 2024] Official repository of SkateFormer


SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition (ECCV 2024)

Jeonghyeok Do, Munchurl Kim (corresponding author)

This repository is the official PyTorch implementation of "SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition". SkateFormer achieves state-of-the-art performance in both skeleton-based action recognition and interaction recognition.

Network Architecture

(Figure: overall network architecture of SkateFormer)


📧 News

  • Sep 26, 2024: A YouTube video about SkateFormer has been uploaded ✨
  • Jul 1, 2024: SkateFormer was accepted to ECCV 2024 🎉
  • Jun 11, 2024: Code for SkateFormer (including the training and testing code and pretrained models) was released 🔥
  • Mar 19, 2024: This repository was created

Reference

@misc{do2024skateformer,
      title={SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition},
      author={Jeonghyeok Do and Munchurl Kim},
      year={2024},
      eprint={2403.09508},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}


Requirements

  • Python >= 3.9.16
  • PyTorch >= 1.12.1
  • Platforms: Ubuntu 22.04, CUDA 11.6
  • A dependency file for our experimental environment is included. To install all dependencies, create a new Anaconda virtual environment from it: run conda env create -f requirements.yaml.
  • Run pip install -e torchlight.
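
After installation, the interpreter and PyTorch versions can be verified with a short sanity-check script (a minimal sketch; the minimum versions are taken from the list above, and the check degrades gracefully if PyTorch is not installed yet):

```python
import sys

def meets_minimum(version, minimum):
    """Compare dotted version strings numerically, e.g. '1.12.1' >= '1.9'."""
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return to_tuple(version) >= to_tuple(minimum)

# Python itself
assert sys.version_info >= (3, 9), "Python >= 3.9.16 is required"

# PyTorch, if already installed in the environment
try:
    import torch
    # Strip a possible local suffix such as '+cu116' before comparing
    assert meets_minimum(torch.__version__.split("+")[0], "1.12.1"), \
        "PyTorch >= 1.12.1 is required"
    print("Environment OK, CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch not installed yet; run `conda env create -f requirements.yaml` first")
```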

Data Preparation

Download datasets

There are 3 datasets to download:

  • NTU RGB+D
  • NTU RGB+D 120
  • NW-UCLA

NTU RGB+D and NTU RGB+D 120

  1. Request the dataset here
  2. Download the skeleton-only datasets:
    1. nturgbd_skeletons_s001_to_s017.zip (NTU RGB+D)
    2. nturgbd_skeletons_s018_to_s032.zip (NTU RGB+D 120)
  3. Extract the above files to ./data/nturgbd_raw

NW-UCLA

  1. Download the dataset from here
  2. Move the all_sqe folder to ./data/NW-UCLA

Data Processing

Directory Structure

  • Put downloaded data into the following directory structure:
- data/
  - NW-UCLA/
    - all_sqe
      ... # raw data of NW-UCLA
  - ntu/
  - ntu120/
  - nturgbd_raw/
    - nturgb+d_skeletons/     # from `nturgbd_skeletons_s001_to_s017.zip`
      ...
    - nturgb+d_skeletons120/  # from `nturgbd_skeletons_s018_to_s032.zip`
      ...

Generating Data

  • Generate NTU RGB+D or NTU RGB+D 120 dataset:
 cd ./data/ntu # or cd ./data/ntu120
 # Extract the skeleton of each performer
 python get_raw_skes_data.py
 # Remove bad skeletons
 python get_raw_denoised_data.py
 # Center each skeleton sequence at the first frame
 python seq_transformation.py
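
The last step roughly amounts to subtracting a reference joint of the first frame from every frame, so each sequence starts centered at the origin. An illustrative numpy sketch (the actual seq_transformation.py also handles splitting and padding, and the reference joint index here is an assumption):

```python
import numpy as np

def center_at_first_frame(seq, ref_joint=1):
    """
    seq: (T, V, C) array of T frames, V joints, C coordinates.
    Subtract the reference joint's position in frame 0 from all frames.
    """
    origin = seq[0, ref_joint]   # (C,) position in the first frame
    return seq - origin          # broadcast over frames and joints

# Toy sequence: 2 frames, 3 joints, 3D coordinates
seq = np.arange(18, dtype=float).reshape(2, 3, 3)
centered = center_at_first_frame(seq)
# The reference joint of the first frame now sits at the origin
assert np.allclose(centered[0, 1], 0.0)
```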

Pretrained Model

Pre-trained models can be downloaded from here.

  • pretrained.zip: trained on NTU RGB+D, NTU RGB+D 120, NTU-Inter, NTU-Inter 120 and NW-UCLA.

Training

# Download code
git clone https://github.com/KAIST-VICLab/SkateFormer
cd SkateFormer

# Train SkateFormer on NTU RGB+D X-Sub60 dataset (joint modality)
python main.py --config ./config/train/ntu_cs/SkateFormer_j.yaml

# Train SkateFormer on NTU RGB+D X-Sub60 dataset (bone modality)
python main.py --config ./config/train/ntu_cs/SkateFormer_b.yaml

# Train SkateFormer on NTU-Inter X-View60 dataset (joint modality)
python main.py --config ./config/train/ntu_cv_inter/SkateFormer_j.yaml

# Train SkateFormer on NTU-Inter 120 X-Set120 dataset (joint modality)
python main.py --config ./config/train/ntu120_cset_inter/SkateFormer_j.yaml 

# Train SkateFormer on NW-UCLA dataset (joint modality)
python main.py --config ./config/train/nw_ucla/SkateFormer_j.yaml
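
The joint and bone configs above differ only in the input modality. Bone features are commonly computed as the coordinate difference between each joint and its parent along the skeleton; a minimal sketch (the parent list here is a toy three-joint chain, not the NTU joint hierarchy):

```python
import numpy as np

# Toy skeleton: joint 0 is the root; joints 1 and 2 form a chain below it.
PARENTS = [0, 0, 1]  # parent index of each joint (the root points to itself)

def joints_to_bones(joints, parents=PARENTS):
    """joints: (T, V, C) -> bones: (T, V, C), bone v = joint v - joint parent(v)."""
    parents = np.asarray(parents)
    return joints - joints[:, parents]

joints = np.array([[[0., 0., 0.], [1., 0., 0.], [1., 1., 0.]]])  # (1, 3, 3)
bones = joints_to_bones(joints)
assert np.allclose(bones[0, 2], [0., 1., 0.])  # joint 2 minus its parent, joint 1
```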

Testing

# Test SkateFormer on NTU RGB+D X-View60 dataset (joint modality)
python main.py --config ./config/test/ntu_cv/SkateFormer_j.yaml

# Test SkateFormer on NTU RGB+D 120 X-Sub120 dataset (joint modality)
python main.py --config ./config/test/ntu120_csub/SkateFormer_j.yaml

# Test SkateFormer on NW-UCLA dataset (bone modality)
python main.py --config ./config/test/nw_ucla/SkateFormer_b.yaml
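
A common evaluation practice in skeleton-based action recognition is to fuse the per-modality classification scores (e.g. joint and bone) by a weighted sum before taking the argmax. A hedged sketch of such score fusion (the function name and toy numbers are illustrative, not part of this repository):

```python
import numpy as np

def ensemble_top1(score_lists, labels, weights=None):
    """Weighted sum of per-modality score arrays (N, num_classes), then top-1 accuracy."""
    weights = weights or [1.0] * len(score_lists)
    fused = sum(w * s for w, s in zip(weights, score_lists))
    preds = fused.argmax(axis=1)
    return (preds == labels).mean()

# Toy example: 2 modalities, 3 samples, 2 classes
joint_scores = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
bone_scores  = np.array([[0.7, 0.3], [0.4, 0.6], [0.3, 0.7]])
labels = np.array([0, 1, 0])
acc = ensemble_top1([joint_scores, bone_scores], labels)
```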

Results

Please visit our project page for more experimental results.

License

The source code, including the checkpoints, may be freely used for research and education only. Any commercial use requires formal permission from the principal investigator (Prof. Munchurl Kim, mkimee@kaist.ac.kr).

Acknowledgement

This repository is built upon FMA-Net, with data processing techniques adapted from SGN and HD-GCN.