
ASTNet: Attention-based Residual Autoencoder for Video Anomaly Detection

This is the official implementation of Attention-based Residual Autoencoder for Video Anomaly Detection.

Updates

  • [5/25/2022] ASTNet is available online.
  • [4/21/2022] Code of ASTNet is released!

Prerequisites

  • Linux or macOS
  • Python 3
  • PyTorch 1.7.0

Setup

The code can be run with Python 3.6 and above.

Clone this repo:

git clone https://github.com/vt-le/astnet.git
cd astnet

Install the required packages:

pip install -r requirements.txt

Testing

Please first download a pre-trained model:

Dataset               Pretrained Model
UCSD Ped2             github / drive
CUHK Avenue           github / drive
ShanghaiTech Campus   github / drive

After preparing a dataset, you can evaluate a pre-trained model on it by running:

python astnet.py \
    --cfg /path/to/config/file \
    --model-file /path/to/pre-trained/model \
    GPUS [{GPU_index}]

Datasets

A dataset is a directory with the following structure:

dataset
    ├── train
    │   └── ${video_id}
    │       └── ${frame_id}.jpg
    ├── test
    │   └── ${video_id}
    │       └── ${frame_id}.jpg
    └── ${dataset}.mat

Citing

If you find our work useful for your research, please consider citing:

@article{le2022attention,
  title={Attention-based residual autoencoder for video anomaly detection},
  author={Le, Viet-Tuan and Kim, Yong-Guk},
  journal={Applied Intelligence},
  pages={1--15},
  year={2022},
  publisher={Springer}
}

Contact

For any questions, please file an issue or contact:

Viet-Tuan Le: tuanlv@sju.ac.kr