A PyTorch implementation of "I3Net: Implicit Instance-Invariant Network for Adapting One-Stage Object Detectors".
Please follow the ssd.pytorch repository to set up the environment. In this project, we use PyTorch 1.0.1 and CUDA 10.0.130.
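A minimal setup sketch, assuming a conda environment; everything beyond PyTorch 1.0.1 / CUDA 10.0 (environment name, Python version, extra packages) is an assumption, so adjust for your system:

```bash
# Create an isolated environment (assumed name "i3net"); Python 3.6 works with PyTorch 1.0.x
conda create -n i3net python=3.6 -y
conda activate i3net

# Install PyTorch 1.0.1 built against CUDA 10.0, plus the matching torchvision
conda install pytorch==1.0.1 torchvision==0.2.2 cudatoolkit=10.0 -c pytorch

# OpenCV is commonly needed by the ssd.pytorch data pipeline (assumption; add extras as required)
pip install opencv-python
```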
- PASCAL_VOC 07+12: Please follow the instructions to prepare the VOC dataset (a download sketch is given below this list).
- Clipart/WaterColor/Comic: Please follow the instructions to prepare these datasets.
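For VOC 07+12, one possible way to fetch the data is sketched below; the `~/data` root follows the ssd.pytorch convention and is an assumption here, so check the dataset paths expected by this repository's configs:

```bash
# Fetch and unpack PASCAL VOC 2007 (trainval + test) and VOC 2012 (trainval)
mkdir -p ~/data && cd ~/data
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar

# All three archives extract into a shared VOCdevkit/ directory
tar xf VOCtrainval_06-Nov-2007.tar
tar xf VOCtest_06-Nov-2007.tar
tar xf VOCtrainval_11-May-2012.tar
```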
- First download the fc-reduced VGG-16 PyTorch base network weights at: https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth
- By default, we assume you have downloaded the file into the `ssd.pytorch/weights` dir:
```bash
mkdir weights
cd weights
wget https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth
```
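An optional sanity check on the downloaded weights (run from the `weights` directory); this just confirms the file loads as a state dict:

```bash
python -c "import torch; sd = torch.load('vgg16_reducedfc.pth', map_location='cpu'); print(len(sd), 'parameter tensors')"
```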
Train I3Net with:

```bash
CUDA_VISIBLE_DEVICES=$GPU_ID \
python train_I3N.py \
    --name file_name \
    --dataset source_dataset --dataset_target target_dataset \
    --basenet path_to_model
```
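As an illustration, a VOC-to-Clipart run might look like the following; the dataset keys (`voc`, `clipart`) and the run name are assumptions, so check the argument parser in `train_I3N.py` for the exact accepted values:

```bash
# Hypothetical example: adapt a VOC-trained detector to Clipart
CUDA_VISIBLE_DEVICES=0 \
python train_I3N.py \
    --name voc_to_clipart \
    --dataset voc --dataset_target clipart \
    --basenet weights/vgg16_reducedfc.pth
```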
Evaluate a trained model with:

```bash
CUDA_VISIBLE_DEVICES=$GPU_ID \
python eval_I3N.py \
    --dataset target_dataset \
    --trained_model_path path_to_model
```
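For example, evaluating a checkpoint on the Clipart target could look like this; the dataset key and the checkpoint path are assumptions:

```bash
# Hypothetical example: evaluate a saved I3Net checkpoint on Clipart
CUDA_VISIBLE_DEVICES=0 \
python eval_I3N.py \
    --dataset clipart \
    --trained_model_path weights/voc_to_clipart_best.pth
```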
If you find this repository useful, please cite our paper:
```bibtex
@inproceedings{CHEN_2021_I3NET,
  title={I3Net: Implicit Instance-Invariant Network for Adapting One-Stage Object Detectors},
  author={Chen, Chaoqi and Zheng, Zebiao and Huang, Yue and Ding, Xinghao and Yu, Yizhou},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}
```