Asiegbu Miracle Kanu-Asiegbu, Ram Vasudevan, and Xiaoxiao Du
ICML 2022 Workshop on Safe Learning for Autonomous Driving Video
ICML 2022 Workshop on Safe Learning for Autonomous Driving Paper (same as the arXiv version)
git clone --recurse-submodules https://github.com/akanuasiegbu/BiPOCO.git
cd docker
- Run `./build.sh` to build the Docker image.
- Use `./run.sh` to enter the Docker container.
- torch==1.4.0
- torchvision==0.5.0
- matplotlib==3.4.1
- tqdm==4.36.1
- yacs==0.1.8
- Pillow==7.0.0
- tensorboardx==2.2
- wandb==0.10.25
- scikit-learn==0.24.1
- opencv-python==4.5.1.48
- coloredlogs==15.0
- termcolor==1.1.0
- dill==0.3.3
- six==1.13.0
- scipy==1.6.2
- seaborn==0.11.0
- pandas==1.1.2
- more_itertools==8.8.0
- protobuf==3.15.8
- The input train and test pose data for BiTraP can be found in this folder.
- Next, download the JSON files and put them in a folder. Then, in `bitrap/datasets/config_for_my_data.py`, set `loc['data_load']['avenue']['train_poses']`, `loc['data_load']['avenue']['test_poses']`, `loc['data_load']['st']['train_poses']`, and `loc['data_load']['st']['test_poses']` to the correct directory.
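For illustration, the relevant assignments in `bitrap/datasets/config_for_my_data.py` might look like the sketch below; the nested-dict layout is inferred from the key names above, and all paths are placeholders that should point at your downloaded JSON folder:

```python
# Sketch of the pose-path entries in bitrap/datasets/config_for_my_data.py.
# The dict structure is an assumption; all paths below are placeholders.
loc = {'data_load': {'avenue': {}, 'st': {}}}
loc['data_load']['avenue']['train_poses'] = '/data/poses/avenue_train.json'
loc['data_load']['avenue']['test_poses'] = '/data/poses/avenue_test.json'
loc['data_load']['st']['train_poses'] = '/data/poses/st_train.json'
loc['data_load']['st']['test_poses'] = '/data/poses/st_test.json'
```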
- To recreate the pose input data:
  - Download the Avenue and ShanghaiTech datasets.
  - Run AlphaPose (commit ddaf4b9) on the Avenue and ShanghaiTech video frames to obtain pose trajectories.
    - Config file used was `configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml`
    - Pretrained model used was `pretrained_models/fast_res50_256x192.pth`
    - Tracker used was Human-ReID based tracking (`--pose_track`)
  - Next, using the JSON files from AlphaPose, add the anomaly labels with `add_to_json_file.py` (testing data only).
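As a rough illustration of this labeling step, the sketch below attaches a per-frame anomaly flag to AlphaPose-style detections. The field name `anomaly`, the `image_id` parsing, and the label source are assumptions; see `add_to_json_file.py` for the actual logic:

```python
import json

def add_anomaly_labels(pose_json_path, frame_labels, out_path):
    """Attach a frame-level anomaly flag to each AlphaPose detection.

    frame_labels maps frame index -> 1 (anomalous) or 0 (normal).
    The 'anomaly' field name and image_id parsing are assumptions.
    """
    with open(pose_json_path) as f:
        detections = json.load(f)
    for det in detections:
        # AlphaPose image_ids typically look like '42.jpg'
        frame = int(str(det['image_id']).split('.')[0])
        det['anomaly'] = frame_labels.get(frame, 0)
    with open(out_path, 'w') as f:
        json.dump(detections, f)
```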
Users can easily train the BiTraP models on the Avenue and ShanghaiTech datasets by running the following commands:
Train on Avenue Dataset
cd bitrap
python tools/train.py --config_file configs/avenue_pose_hc.yml
Train on ShanghaiTech Dataset
cd bitrap
python tools/train.py --config_file configs/st_pose_hc.yml
To train or run inference on CPU or GPU, simply append `DEVICE='cpu'` or `DEVICE='cuda'` to the command. By default we use the GPU for both training and inference.
Note that you must set the input and output lengths to be the same in the YML file used (`INPUT_LEN` and `PRED_LEN`) and in `bitrap/datasets/config_for_my_data.py` (`input_seq` and `pred_seq`).
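For example, the two configs must agree as in this sketch (the value 13 is a placeholder, not a recommended setting):

```python
# Placeholder sketch: the YML settings and config_for_my_data.py must agree.
INPUT_LEN, PRED_LEN = 13, 13  # from the YML config file
input_seq, pred_seq = 13, 13  # from bitrap/datasets/config_for_my_data.py
assert (INPUT_LEN, PRED_LEN) == (input_seq, pred_seq), \
    "sequence lengths must match across both config files"
```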
Pretrained models for Avenue and ShanghaiTech can be found.
Pkl files of the best-performing configurations (bolded in Tables 2 and 3) can be found.
To obtain the rest of the pkl files for the pose trajectories for the first-person (ego-centric) view Avenue and ShanghaiTech datasets, use the commands below.
Test on Avenue dataset:
cd bitrap
python tools/test.py --config_file configs/avenue_pose_hc.yml CKPT_DIR **DIR_TO_CKPT**
Test on ShanghaiTech dataset:
cd bitrap
python tools/test.py --config_file configs/st_pose_hc.yml CKPT_DIR **DIR_TO_CKPT**
Note that you must set the input and output lengths to be the same in the YML file used (`INPUT_LEN` and `PRED_LEN`) and in `bitrap/datasets/config_for_my_data.py` (`input_seq` and `pred_seq`).
Training and inference are done with the predictor model. Given the pkl output files from inference, we can obtain the AUC score by following the instructions below.
- In `config/config.py`, change `input_seq` and `pred_seq` to match the input and output sequence lengths.
- Also in `config/config.py`, make sure to change `exp['data']` to match `hr-st`, `st`, `avenue`, or `hr-avenue`.
- Also in `config/config.py`, make sure to change `exp['errortype']` to match `error_summed` or `error_flattened`.
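One plausible interpretation of the two error types is sketched below; this is an assumption (check the code in `experiments_code` for the actual definitions): `error_summed` aggregates keypoint errors into a single score per frame, while `error_flattened` keeps an individual score per keypoint.

```python
import numpy as np

# Illustrative only: interpretation of the two error types is an assumption.
pred = np.zeros((4, 17, 2))  # frames x keypoints x (x, y)
gt = np.ones((4, 17, 2))
err = np.linalg.norm(pred - gt, axis=-1)  # per-keypoint L2 error, shape (4, 17)
error_summed = err.sum(axis=1)            # shape (4,): one score per frame
error_flattened = err.reshape(-1)         # shape (68,): one score per keypoint
```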
In `experiments_code/main.py`, change the variable `file_to_load` to point to the correct pkl file.
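As a minimal sketch of the final step, a frame-level AUC can be computed with scikit-learn (already in the dependency list). The variable names and values here are illustrative, not the actual ones used in `experiments_code/main.py`:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Illustrative frame-level anomaly scores (e.g. pose prediction error)
# and ground-truth labels (1 = anomalous frame, 0 = normal).
scores = np.array([0.1, 0.9, 0.8, 0.2])
labels = np.array([0, 1, 1, 0])
auc = roc_auc_score(labels, scores)
print(auc)  # 1.0: anomalous frames score strictly higher than normal ones
```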
Look at `experiments_code/run_q.py`.
If you find this repo useful, feel free to cite:
@article{kanu2022bipoco,
title={BiPOCO: Bi-Directional Trajectory Prediction with Pose Constraints for Pedestrian Anomaly Detection},
author={Kanu-Asiegbu, Asiegbu Miracle and Vasudevan, Ram and Du, Xiaoxiao},
journal={arXiv preprint arXiv:2207.02281},
year={2022}
}