By Daniel Oñoro-Rubio and Roberto J. López-Sastre.
GRAM, University of Alcalá, Alcalá de Henares, Spain.
This is the official code implementation of the work described in our ECCV 2016 paper.
This repository provides the implementation of CCNN and Hydra models for object counting.
Was our code useful to you? Please cite us:

```
@inproceedings{onoro2016,
  Author = {O\~noro-Rubio, D. and L\'opez-Sastre, R.~J.},
  Title = {Towards perspective-free object counting with deep learning},
  Booktitle = {ECCV},
  Year = {2016}
}
```
The license information of this project is described in the file "LICENSE.txt".
### Contents

- Requirements: software
- Requirements: hardware
- Basic installation
- Demo
- How to reproduce the results of the paper
- Remarks
- Acknowledgements
### Requirements: software

- A Linux distribution. We have developed and tested the code on Ubuntu.
- Caffe and pycaffe. Follow the Caffe installation instructions.

  Note: Caffe must be built with support for Python layers!

  ```make
  # In your Makefile.config, make sure to have this line uncommented
  WITH_PYTHON_LAYER := 1
  ```
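Before building, a quick sanity check like the following can confirm the flag is uncommented (a sketch: `/tmp/Makefile.config.example` stands in for your real `<your_caffe_root_path>/Makefile.config`):

```shell
# Sketch: check that Python layer support is enabled in a Makefile.config.
# /tmp/Makefile.config.example stands in for <your_caffe_root_path>/Makefile.config.
printf 'WITH_PYTHON_LAYER := 1\n' > /tmp/Makefile.config.example

if grep -q '^WITH_PYTHON_LAYER := 1' /tmp/Makefile.config.example; then
  echo "Python layers enabled"
else
  echo "Python layers NOT enabled -- uncomment the flag and rebuild Caffe"
fi
```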
- Python packages you need: `cython`, `python-opencv`, `python-h5py`, `easydict`, `pillow` (version >= 3.4.2).
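One possible way to install these dependencies (a sketch only: exact package names depend on your distribution and Python setup, and on many systems the OpenCV and HDF5 bindings come from the system package manager rather than pip):

```shell
# Sketch only: adjust package names to your distribution / Python environment.
sudo apt-get install python-opencv python-h5py   # bindings via the system package manager
pip install cython easydict "pillow>=3.4.2"
```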
### Requirements: hardware

This code can run on either CPU or GPU, but we strongly recommend using a GPU.

- For training, we recommend using a GPU with at least 3GB of memory.
- For testing, a GPU with 2GB of memory is enough.
### Basic installation

- Be sure you have added the `tools` directory of your Caffe installation to your `PATH`:

  ```Shell
  export PATH=<your_caffe_root_path>/build/tools:$PATH
  ```

- Be sure you have added your pycaffe compilation to your `PYTHONPATH`:

  ```Shell
  export PYTHONPATH=<your_caffe_root_path>/python:$PYTHONPATH
  ```
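Both exports can go in your `~/.bashrc` so they persist across shells; a sketch, where `/opt/caffe` is a hypothetical Caffe location you should replace with your own path:

```shell
# Hypothetical Caffe root; replace with your own <your_caffe_root_path>.
CAFFE_ROOT=/opt/caffe
export PATH="$CAFFE_ROOT/build/tools:$PATH"
export PYTHONPATH="$CAFFE_ROOT/python:$PYTHONPATH"

# The first PYTHONPATH entry should now point at pycaffe:
echo "${PYTHONPATH%%:*}"
```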
### Demo

We provide a demo that predicts the number of vehicles in the test images of the TRANCOS dataset, which was used in our ECCV paper. The demo uses the CCNN model described in the paper, and the results reported there can be reproduced with it.

To run the demo, follow these steps:
- Download the TRANCOS dataset by executing the provided script:

  ```Shell
  ./tools/get_trancos.sh
  ```

- You should now have a new directory with the TRANCOS dataset at `data/TRANCOS`.

- Download the TRANCOS CCNN pretrained model:

  ```Shell
  ./tools/get_trancos_model.sh
  ```

- Finally, run the demo:

  ```Shell
  ./tools/demo.sh
  ```
### How to reproduce the results of the paper

We provide the scripts needed to train and test all the models (CCNN and Hydra) with the datasets used in our ECCV paper. These are the steps to follow.

#### Download a dataset

To download and set up a dataset, we recommend using our scripts. Place yourself in the $PROJECT directory and run one of the following:
- `./tools/get_trancos.sh`
- `./tools/get_ucsd.sh`
- `./tools/get_ucf.sh`

Note: make sure the folder `data/` does not already contain the dataset.
#### Download the pre-trained models

All our pre-trained models can be downloaded using the corresponding script:
```Shell
./tools/get_all_DATASET_CHOSEN_models.sh
```
Simply substitute DATASET_CHOSEN with trancos, ucsd, or ucf.
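For instance, for the TRANCOS models the substitution expands as follows (a sketch that just prints the resulting script name):

```shell
# Expand DATASET_CHOSEN into the concrete script name.
DATASET_CHOSEN=trancos
echo "./tools/get_all_${DATASET_CHOSEN}_models.sh"
```

which prints `./tools/get_all_trancos_models.sh`.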
#### Test the pre-trained models

- Edit the corresponding script `$PROJECT/experiments/scripts/DATASET_CHOSEN_test_pretrained.sh`.
- Run the corresponding script:

  ```Shell
  ./experiments/scripts/DATASET_CHOSEN_test_pretrained.sh
  ```

Note that these pre-trained models will let you reproduce the results in our paper.
#### Train and test your own models

- Edit the launching script (e.g. `$PROJECT/experiments/scripts/DATASET_CHOSEN_train_test.sh`).
- Place yourself in the $PROJECT folder and run the launching script:

  ```Shell
  ./experiments/scripts/DATASET_CHOSEN_train_test.sh
  ```
### Remarks

To provide a better distribution, this repository unifies and reimplements some of the original modules in Python. Due to these changes in the underlying libraries, the results produced by this software may differ slightly from those reported in the paper.
### Acknowledgements

This work is supported by the projects of the DGT with references SPIP2014-1468 and SPIP2015-01809, and the project of the MINECO TEC2013-45183-R.