Time-Series-Library_skeleton

Time series Forecasting on the NTU_RGB+D skeleton dataset using AutoFormer and FEDFormer




Time-series-Forecasting babygarches

Deep learning models to predict human skeleton motion
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Usage
  3. Getting Started
  4. Roadmap
  5. Contact
  6. Pipeline of the code

About Time-series-Forecasting babygarches

The goal of this project is to predict human skeleton motion using deep learning architectures, especially FEDformer and Autoformer.
It relies heavily on the Time-Series Library from thuml.
Don't forget to give this project and thuml's project a star! Thanks again!

(back to top)

Usage

This repo provides several features:

  • you can preprocess the NTU_RGB+D dataset efficiently. The implementation is in the folder data_loader
  • you can train FEDformer and Autoformer models on this dataset thanks to exp_Long_Term_Forecast
  • you can plot your results. They are stored in test_results after the test of your model. If you just want to plot the skeleton, you can look here (a small sketch follows this list)
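
If you only want a quick look at a skeleton outside the repo's plotting utilities, the sketch below scatter-plots a single frame with matplotlib. It is not the repo's own code: the file name sample.npy and the (frames, 25 joints, 3 coordinates) array layout are assumptions you should check against the output of the preprocessing step.

    # Minimal sketch of plotting one frame of a preprocessed skeleton.
    # Assumptions (not taken from this repo): the .npy file holds an array
    # of shape (frames, 25, 3) and "sample.npy" is a hypothetical file name.
    import numpy as np
    import matplotlib.pyplot as plt

    skeleton = np.load("dataset/NTU_RGB+D/numpyed/sample.npy")
    frame = skeleton[0]  # first frame, shape (25, 3)

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.scatter(frame[:, 0], frame[:, 1], frame[:, 2])
    for j, (x, y, z) in enumerate(frame):
        ax.text(x, y, z, str(j), fontsize=7)  # label each joint with its index
    ax.set_title("NTU_RGB+D skeleton, frame 0")
    plt.show()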

You can see how our model behaves on the dataset in the folder videos_example. Please note that there is still a lot of room for improvement.
I added a README to every folder to help you grasp what the functions are supposed to do.
A FAQ is also available for any further technical questions. These comments are unfortunately in French.
If you want to quickly use some functions of this repo, I added COMMANDE_UTILE.ipynb, which summarizes the usual commands.

Getting Started

To get a local copy up and running, follow these simple steps.

Installation

  1. Clone the repo

    git clone https://github.com/gardiens/Time-Series-Library_babygarches.git

  2. Install the Python requirements

    pip install -r requirements.txt

  3. If you want to use NTU_RGB+D, download the dataset here

  4. Run txt2npy. The .npy files should be stored in dataset/NTU_RGB+D/numpyed/ and the raw data in dataset/NTU_RGB+D/raw/

  5. Build the csv for the data. It may take a while

    python3 build_csv.py

  6. Then run main.py with your arguments :) Some scripts are provided in the scripts folder, for example:

    sh scripts/utils/template_script.sh

  7. You can deep dive into your results with several tools: videos of some samples are stored in the folder test_results, a dataframe with the loss of each sample is stored in results, and you can inspect your runs in the folder runs thanks to TensorBoard. If you are working on the NTU_RGB+D dataset, you may need to install ffmpeg to view the videos. A small example of digging into the results follows this list.
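
To go a bit further than eyeballing the videos, you can load the per-sample loss dataframe with pandas. This is a sketch only: the file results/results_df.csv is mentioned in the Pipeline section below, but the "loss" column name is an assumption to adapt to the header of your own run's file.

    # Sketch: rank the samples by loss after a run. The "loss" column name
    # is an assumption; check the header of the csv produced by your run.
    import pandas as pd

    df = pd.read_csv("results/results_df.csv")
    print(df.sort_values("loss", ascending=False).head(10))  # 10 worst samples
    print("mean loss over all samples:", df["loss"].mean())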

(back to top)

Roadmap

This is the roadmap if you want to push the model further; however, I will not update the repo in the near future.

Non technical roadmap

  • Insert categorical values in the prediction.
  • Insert a wavelet transform (illustrated in the sketch after this list).
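
As a pointer for the wavelet item above, the sketch below decomposes a single coordinate series with PyWavelets. It only illustrates the idea and is not code from this repo; the synthetic signal stands in for one coordinate of one joint over time.

    # Illustration of the wavelet-transform roadmap item (not repo code):
    # multilevel discrete wavelet decomposition of a 1-D coordinate series.
    import numpy as np
    import pywt

    t = np.linspace(0, 4 * np.pi, 256)
    signal = np.sin(t) + 0.1 * np.random.randn(256)  # stand-in for one joint coordinate
    coeffs = pywt.wavedec(signal, "db4", level=3)    # approximation + 3 detail bands
    for i, c in enumerate(coeffs):
        print(f"band {i}: {len(c)} coefficients")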

More technical roadmap

  • Rewrite the preprocessing step so it is easier to add new steps.
  • Rewrite the preprocessing steps in PyTorch.
  • Ease the fetching of new results and get faster insights into them. This means fetching the data faster and having more visual analysis of the models (gradients/non-zero layers...).

(back to top)

Contact

Project Link: https://github.com/gardiens/Time-Series-Library_babygarches
You can contact me by email (pierrick.bournez@student-cs.fr).
If you have new ideas or findings, you can talk to Mr Rambaud: philippe.rambaud@lisn.fr
Please star the repo if you find it useful :)
You can get access to some insights from my experience here if you are lucky enough.

Citation

Incoming

Acknowledgement

This library is built on top of thuml's Time-Series-Library: https://github.com/thuml/Time-Series-Library

Pipeline of the code (technical)

The code is organized as follows:

  1. When you run main.py, it builds an instance of exp/Long_term_forecasting, which is the pipeline of the training/test
  2. It finds the dataset in dataset/your_dataset and builds the model in models/your_model. It eventually runs the training/test code
  3. You can fetch the results and logs in several folders:
    • In test_results you can see videos of your model after the training session,
    • in results you have results_df.csv, a dataframe that gives the loss of every sample of the model,
    • in runs you have the TensorBoard logs of the run.

The setting name is supposed to be a unique ID for each model run (see the sketch below).
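
For orientation, here is a rough sketch of that flow in code. It is not main.py itself: the argument set is heavily trimmed, and the import path and argument names are assumptions based on thuml's Time-Series-Library, from which this repo is derived.

    # Sketch of the pipeline, NOT the actual main.py (which defines many
    # more arguments). Import path and argument names are assumptions
    # based on thuml's Time-Series-Library.
    import argparse
    from exp.exp_long_term_forecasting import Exp_Long_Term_Forecast

    parser = argparse.ArgumentParser()
    parser.add_argument("--model", default="Autoformer")
    parser.add_argument("--data", default="NTU_RGB+D")
    parser.add_argument("--seq_len", type=int, default=96)   # input length
    parser.add_argument("--pred_len", type=int, default=96)  # forecast length
    args = parser.parse_args()

    # the setting string uniquely identifies this run's output folders
    setting = f"{args.model}_{args.data}_sl{args.seq_len}_pl{args.pred_len}"

    exp = Exp_Long_Term_Forecast(args)  # builds the model from models/
    exp.train(setting)                  # TensorBoard logs under runs/
    exp.test(setting)                   # videos under test_results/, losses under results/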

(back to top)