TensorFlow port of the tracking method described in the paper *Fully-Convolutional Siamese Networks for Object Tracking*.
In particular, it is the improved version presented as the baseline in *End-to-end representation learning for Correlation Filter based tracking*, which achieves state-of-the-art performance at high framerate. The other methods presented in that paper (similar performance, shallower network) have not been ported yet.
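At its core, the tracker embeds an exemplar patch of the target and a larger search patch with the same fully-convolutional network, then cross-correlates the two feature maps to obtain a response map whose peak locates the target. The snippet below is a minimal illustrative sketch of that scoring step, not the code used in this repository; tensor names and shapes are assumptions for illustration.

```python
# Illustrative sketch of the SiamFC scoring step (not this repository's code):
# the exemplar embedding is used as a convolution filter over the search
# embedding, so the peak of the response map locates the target.
import tensorflow as tf

def cross_correlation(exemplar_feat, search_feat):
    # exemplar_feat: [1, He, We, C], search_feat: [1, Hs, Ws, C], both produced
    # by the same convolutional backbone with shared weights.
    kernel = tf.transpose(exemplar_feat, [1, 2, 3, 0])  # -> [He, We, C, 1]
    # Dense sliding-window similarity, i.e. 2-D cross-correlation.
    return tf.nn.conv2d(search_feat, kernel, strides=[1, 1, 1, 1], padding='VALID')
```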
- Get virtualenv if you don't have it already: `pip install virtualenv`
- Create a new virtualenv with Python 2.7: `virtualenv --python=/usr/bin/python2.7 ve-tracking`
- Activate the virtualenv: `source ve-tracking/bin/activate`
- Clone the repository: `git clone https://github.com/torrvision/siamfc-tf.git`
- `cd siamfc-tf`
- Install the required packages (no `sudo` needed inside the virtualenv): `pip install -r requirements.txt`
- `mkdir pretrained data`
- Download the pretrained networks into `pretrained` and unzip the archive (we will only use `baseline-conv5_e55.mat`)
- Download the video sequences into `data` and unzip the archive.
- Set `video` from `parameters.evaluation` to `"all"` or to a specific sequence (e.g. `"vot2016_ball1"`); see the configuration sketch after this list.
- See if you are happy with the default parameters in `parameters/hyperparameters.json`
- Optionally enable visualization in `parameters/run.json`
- Call the main script (within an active virtualenv session): `python run_tracker_evaluation.py`
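The evaluation settings can also be edited programmatically instead of by hand. The sketch below assumes the settings live in `parameters/evaluation.json` and `parameters/run.json` and that the latter exposes a `visualization` flag; these file and key names are assumptions inferred from the steps above, so check the files shipped in `parameters/` before relying on them.

```python
# Hedged sketch: edit the JSON parameter files from Python instead of by hand.
# File and key names ("video", "visualization") are assumptions inferred from
# the steps above; verify them against the files shipped in parameters/.
import json

def set_json_key(path, key, value):
    with open(path) as f:
        params = json.load(f)
    params[key] = value
    with open(path, 'w') as f:
        json.dump(params, f, indent=4)

set_json_key('parameters/evaluation.json', 'video', 'vot2016_ball1')  # or 'all'
set_json_key('parameters/run.json', 'visualization', 1)  # show frames while tracking
```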