[arXiv] | [Paper]
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024
The code was written and tested on top of the pytorch/pytorch:2.0.1-cuda11.7-cudnn8-devel Docker image. To install the dependencies, run:
pip install -r requirements.txt
Download and set up the dataset with the following command:
bash dataset_download.sh
To train the model, run Stage 1 and Stage 2 sequentially. For Stage 1, run:
python train.py
For Stage 2, modify the following parameters in config.py:
num_epochs: 140
save_log_weights_interval: 20
train_metric_interval: 20
learning_rate: 5e-4
steps: []
stage1: True
and run:
python train.py
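As a sketch, assuming config.py stores these settings as plain module-level variables (the actual layout of config.py may differ, e.g. a dict or a class), the Stage 2 values listed above would look like:

```python
# Hypothetical sketch of the Stage 2 settings in config.py;
# variable names are taken from the list above, the file layout is assumed.
num_epochs = 140                 # total training epochs
save_log_weights_interval = 20   # save logs/weights every 20 epochs
train_metric_interval = 20       # compute training metrics every 20 epochs
learning_rate = 5e-4             # optimizer learning rate
steps = []                       # no learning-rate decay steps
stage1 = True                    # stage flag, set exactly as listed above
```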
To test the model, run:
python test.py --ckpt {Path to checkpoint}
The benchmark scores reported in the paper use SLT's evaluation code, provided under eval/.
For the CAD/frog sequence alone, delete the ground-truth images from 021_gt.png onwards: these are empty masks and cause the MATLAB code to throw an error.
In main_CAD.m and main_MoCA.m, change the following path according to where you are saving the predictions and where you have placed the dataset:
resPath = ['../best/' seqfolder '/']; % Enter the path of the results
Before running the MATLAB scripts, make sure the Deep Learning Toolbox is installed in MATLAB. Then run the following scripts:
main_CAD.m
main_MoCA.m
@InProceedings{Meeran_2024_CVPR,
author = {Meeran, Muhammad Nawfal and T, Gokul Adethya and Mantha, Bhanu Pratyush},
title = {SAM-PM: Enhancing Video Camouflaged Object Detection using Spatio-Temporal Attention},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2024},
pages = {1857-1866}
}