PyTorch code for the paper "Locating Objects Without Bounding Boxes" (https://arxiv.org/pdf/1806.07564.pdf). If you use this code, please cite:
@article{ribera2019,
  title={Locating Objects Without Bounding Boxes},
  author={Javier Ribera and David G\"{u}era and Yuhao Chen and Edward J. Delp},
  journal={Proceedings of the Computer Vision and Pattern Recognition (CVPR)},
  month={June},
  year={2019},
  note={{Long Beach, CA}}
}
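For context, the loss the paper introduces is the weighted Hausdorff distance between a model's output probability map and the set of ground-truth points. The repository contains the official implementation; what follows is only a simplified single-image sketch of the formula as described in the paper (the function name and the alpha and eps defaults are assumptions here):

import torch

def weighted_hausdorff(prob_map, gt_points, alpha=-1.0, eps=1e-6):
    # prob_map: (H, W) tensor with values in [0, 1]
    # gt_points: (N, 2) tensor of ground-truth (row, col) locations
    h, w = prob_map.shape
    d_max = (h ** 2 + w ** 2) ** 0.5  # image diagonal: largest possible distance

    # Coordinates of every pixel, flattened to shape (H*W, 2)
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()
    p = prob_map.flatten()

    # Pairwise distances between each pixel and each ground-truth point: (H*W, N)
    d = torch.cdist(grid, gt_points.float())

    # Term 1: every activated pixel should lie near some ground-truth point.
    term1 = (p * d.min(dim=1).values).sum() / (p.sum() + eps)

    # Term 2: every ground-truth point should lie near some activated pixel.
    # The hard minimum over pixels is softened with a generalized mean
    # (a negative alpha approximates the minimum).
    weighted = p.unsqueeze(1) * d + (1.0 - p).unsqueeze(1) * d_max
    m_alpha = (weighted.clamp(min=eps) ** alpha).mean(dim=0) ** (1.0 / alpha)
    term2 = m_alpha.mean()

    return term1 + term2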
The datasets used in the paper can be downloaded from:
Use conda to recreate the environment provided with the code:
conda env create -f environment.yml
Activate the environment:
conda activate object-locator
Install the tool:
pip install .
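As a quick, optional sanity check that the environment works (the printed version depends on environment.yml, and CUDA availability depends on your machine):

import torch
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # if False, pass --no-gpu when locating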
Whenever you use the tool, activate the environment first:
conda activate object-locator
Run this to get help (usage instructions):
python -m object-locator.locate -h
python -m object-locator.train -h
Example (inference and evaluation):
python -m object-locator.locate \
    --dataset DIRECTORY \
    --out DIRECTORY \
    --model CHECKPOINTS \
    --evaluate \
    --no-gpu \
    --radius 5
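If you need to script evaluation, the same command can be driven from Python. A minimal sketch, assuming data/test, out/, and saved_model.ckpt are placeholders for your own paths (only the flags from the example above are used):

# Hypothetical wrapper: evaluate the locator at several detection radii.
# All paths are placeholders; every flag comes from the example above.
import subprocess
import sys

for radius in (3, 5, 7):
    subprocess.run(
        [sys.executable, "-m", "object-locator.locate",
         "--dataset", "data/test",
         "--out", f"out/radius_{radius}",
         "--model", "saved_model.ckpt",
         "--evaluate",
         "--no-gpu",
         "--radius", str(radius)],
        check=True,  # stop if any run fails
    )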
Example (training):
python -m object-locator.train \
    --train-dir TRAINING_DIRECTORY \
    --batch-size 32 \
    --env-name sorghum \
    --lr 1e-3 \
    --val-dir TRAINING_DIRECTORY \
    --optim Adam \
    --save saved_model.ckpt
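Both --train-dir and --val-dir must contain the images together with their point annotations. As an illustration only (the gt.csv filename and the filename/count/locations columns below are assumptions, not the repository's documented format), such a file could be generated like this:

# Hypothetical sketch of writing point annotations for a training directory.
# The gt.csv filename and the filename/count/locations columns are assumptions;
# check the repository's documented dataset format before relying on this.
import csv

annotations = {
    "plant_0001.png": [(120, 45), (130, 210)],  # (row, col) object centers
    "plant_0002.png": [(88, 152)],
}

with open("TRAINING_DIRECTORY/gt.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "count", "locations"])
    for name, points in annotations.items():
        writer.writerow([name, len(points), str(points)])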
Models are trained separately for each of the four datasets, as described in the paper:
The COPYRIGHT of the pre-trained models is the same as in this repository.
To uninstall, deactivate and remove the conda environment:
conda deactivate
conda env remove --name object-locator
The code used in the paper corresponds to the tag used-for-cvpr2019-submission. If you want to reproduce the results, check out that tag with:
git checkout used-for-cvpr2019-submission
The master branch is the latest version available, with bug fixes and improved documentation.
If you want to develop or retrain your models, we recommend the master branch.
Version numbers follow semantic versioning, and the changelog is in CHANGELOG.md.