
anomaly-captioning

This repository contains code for visualizing video captioning inference results on anomaly videos (e.g., the UCF-Crime dataset).

To run PDVC inference, follow the PDVC repository to download the pretrained weights to model-best.pth and place test videos at PDVC/visualization/sample/videos. Then run run_custom_video.sh and check the captioned outputs at PDVC/visualization/sample/videos.

To run BMT inference, follow the BMT repository or its tutorial to download the pretrained weights best_prop_model.pt and best_cap_model.pt. Then follow the provided script for the inference steps: prepare test videos, process the generated captions (.json), and visualize the captioned video outputs at BMT/sample_output.
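The exact caption JSON schema is defined by each repo, but as a rough sketch of the "process the generated captions" step, assuming a dense-captioning-style layout of the form {video_id: {"timestamps": [[start, end], ...], "sentences": [...]}} (an assumption; adapt the keys to the actual output file), the captions could be inspected like this:

```python
import json


def load_captions(path):
    """Load a captions JSON into {video_id: [(start, end, sentence), ...]}.

    Assumes parallel "timestamps" and "sentences" lists per video, as in
    dense-captioning-style outputs; adjust the keys for the real schema.
    """
    with open(path) as f:
        data = json.load(f)
    captions = {}
    for video_id, entry in data.items():
        pairs = zip(entry.get("timestamps", []), entry.get("sentences", []))
        captions[video_id] = [(start, end, sent) for (start, end), sent in pairs]
    return captions


if __name__ == "__main__":
    # "captions.json" is a placeholder path for the generated caption file.
    for vid, caps in load_captions("captions.json").items():
        for start, end, sentence in caps:
            print(f"{vid} [{start:.1f}-{end:.1f}s] {sentence}")
```

The timestamped sentences returned here can then be overlaid on the corresponding video segments when rendering the captioned outputs.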