UPDATE:

- The toolkit can now be installed directly via `pip install got10k`.
- Check our lightweight repository SiamFC for a minimal example of training and evaluation using the GOT-10k toolkit!
This repository contains the official Python toolkit for running experiments and evaluating performance on the GOT-10k benchmark. The code is written in pure Python and is compile-free. Although both Python 2 and Python 3 are supported, we recommend Python 3 for better performance.
For convenience, the toolkit also provides unofficial implementations of dataset interfaces and tracking pipelines for the OTB (2013/2015), VOT (2013~2018), DTB70, TColor128, NfS and UAV123 benchmarks.
GOT-10k is a large, high-diversity, one-shot database for training and evaluating general-purpose visual trackers. If you use the GOT-10k database or toolkit for a research publication, please consider citing:
"GOT-10k: A Large High-Diversity Benchmark for Generic Object Tracking in the Wild."
L. Huang, X. Zhao and K. Huang,
arXiv:1810.11981, 2018.
- Installation
- Quick Start: A Concise Example
- Quick Start: Jupyter Notebook for Off-the-Shelf Usage
- How to Define a Tracker?
- How to Run Experiments on GOT-10k?
- How to Evaluate Performance?
- How to Loop Over GOT-10k Dataset?
- Issues
### Installation

Install the toolkit using `pip` (recommended):

```
pip install --upgrade got10k
```
Or, alternatively, clone the repository and install dependencies:

```
git clone https://github.com/got-10k/toolkit.git
cd toolkit
pip install -r requirements.txt
```

Then copy the `got10k` folder into your workspace to use it.
### Quick Start: A Concise Example

Here is a simple example showing how to use the toolkit to define a tracker, run experiments on GOT-10k and evaluate performance.
```python
from got10k.trackers import Tracker
from got10k.experiments import ExperimentGOT10k

class IdentityTracker(Tracker):
    def __init__(self):
        super(IdentityTracker, self).__init__(name='IdentityTracker')

    def init(self, image, box):
        self.box = box

    def update(self, image):
        return self.box

if __name__ == '__main__':
    # setup tracker
    tracker = IdentityTracker()

    # run experiments on GOT-10k (validation subset)
    experiment = ExperimentGOT10k('data/GOT-10k', subset='val')
    experiment.run(tracker, visualize=True)

    # report performance
    experiment.report([tracker.name])
```
To run experiments on OTB, VOT or other benchmarks, simply change `ExperimentGOT10k`, e.g., to `ExperimentOTB` or `ExperimentVOT`, and change `root_dir` to the corresponding dataset path.
### Quick Start: Jupyter Notebook for Off-the-Shelf Usage

Open `quick_examples.ipynb` in Jupyter Notebook to see more examples of toolkit usage.
### How to Define a Tracker?

To define a tracker using the toolkit, simply inherit the `Tracker` class and override its `init` and `update` methods. Here is a simple example:
```python
from got10k.trackers import Tracker

class IdentityTracker(Tracker):
    def __init__(self):
        super(IdentityTracker, self).__init__(
            name='IdentityTracker',  # tracker name
            is_deterministic=True    # stochastic (False) or deterministic (True)
        )

    def init(self, image, box):
        self.box = box

    def update(self, image):
        return self.box
```
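In this protocol, `init` is called once with the first frame and its groundtruth box, and `update` is called on every subsequent frame to produce a prediction. As a minimal self-contained sketch of how an experiment loop drives these two methods (the `Tracker` stub below is hypothetical, standing in for `got10k.trackers.Tracker` so the snippet runs without the toolkit installed):

```python
# Hypothetical stub standing in for got10k.trackers.Tracker (illustration only).
class Tracker:
    def __init__(self, name, is_deterministic=True):
        self.name = name
        self.is_deterministic = is_deterministic

    def track(self, frames, init_box):
        """Run the init/update protocol over a sequence of frames."""
        boxes = [init_box]            # frame 0 keeps the groundtruth box
        self.init(frames[0], init_box)
        for frame in frames[1:]:
            boxes.append(self.update(frame))
        return boxes

class IdentityTracker(Tracker):
    def __init__(self):
        super().__init__(name='IdentityTracker')

    def init(self, image, box):
        self.box = box                # remember the initial box

    def update(self, image):
        return self.box               # always predict the initial box

frames = ['frame0.jpg', 'frame1.jpg', 'frame2.jpg']  # placeholders for images
tracker = IdentityTracker()
boxes = tracker.track(frames, init_box=(10, 20, 50, 40))
print(boxes)  # one predicted box per frame
```

Any real tracker only needs to fill in `init` and `update`; the toolkit's experiment classes take care of looping over frames and sequences.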
### How to Run Experiments on GOT-10k?

Instantiate an `ExperimentGOT10k` object, and leave all experiment pipelines to its `run` method:
```python
from got10k.experiments import ExperimentGOT10k

# ... tracker definition ...

# instantiate a tracker
tracker = IdentityTracker()

# setup experiment (validation subset)
experiment = ExperimentGOT10k(
    root_dir='data/GOT-10k',  # GOT-10k's root directory
    subset='val',             # 'train' | 'val' | 'test'
    result_dir='results',     # where to store tracking results
    report_dir='reports'      # where to store evaluation reports
)
experiment.run(tracker, visualize=True)
```
The tracking results will be stored in `result_dir`.
### How to Evaluate Performance?

Use the `report` method of `ExperimentGOT10k` for this purpose:
```python
# ... run experiments on GOT-10k ...

# report tracking performance
experiment.report([tracker.name])
```
When evaluated on the validation subset, the scores and curves are generated directly in `report_dir`.
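The headline numbers on GOT-10k are the average overlap (AO, the mean IoU between predicted and groundtruth boxes) and success rates at fixed overlap thresholds. A self-contained sketch of how such scores can be computed from boxes in `[x, y, w, h]` format (this mirrors the metrics' definitions, not the toolkit's internal code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    # width/height of the intersection rectangle (clamped at zero)
    iw = max(0, min(xa + wa, xb + wb) - max(xa, xb))
    ih = max(0, min(ya + ha, yb + hb) - max(ya, yb))
    inter = iw * ih
    union = wa * ha + wb * hb - inter
    return inter / union if union > 0 else 0.0

def average_overlap(pred_boxes, gt_boxes):
    """Mean IoU over a sequence (the AO score)."""
    ious = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    return sum(ious) / len(ious)

def success_rate(pred_boxes, gt_boxes, threshold=0.5):
    """Fraction of frames whose IoU exceeds the threshold (the SR score)."""
    ious = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    return sum(i > threshold for i in ious) / len(ious)

preds = [(0, 0, 2, 2), (1, 1, 2, 2)]
gts = [(0, 0, 2, 2), (0, 0, 2, 2)]
print(average_overlap(preds, gts))  # (1.0 + 1/7) / 2
print(success_rate(preds, gts))     # 0.5
```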
However, when evaluated on the test subset, since all groundtruths are withheld, you will have to submit your results to the evaluation server. The `report` method generates a `.zip` file that can be uploaded directly for submission. For more instructions, see the submission instructions.
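If you ever need to build such an archive by hand, the standard library suffices; a minimal sketch (the directory layout and file names below are illustrative only, not the toolkit's exact output format):

```python
import os
import shutil
import tempfile

# Build a throwaway results directory (illustrative stand-in for a
# tracker's results folder produced by the toolkit).
work_dir = tempfile.mkdtemp()
results_dir = os.path.join(work_dir, 'IdentityTracker')
os.makedirs(results_dir)
with open(os.path.join(results_dir, 'example_sequence.txt'), 'w') as f:
    f.write('10,20,50,40\n')  # one x,y,w,h box per line

# Zip the whole folder; make_archive appends '.zip' itself.
zip_path = shutil.make_archive(results_dir, 'zip', root_dir=results_dir)
print(zip_path)
```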
See public evaluation results on GOT-10k's leaderboard.
### How to Loop Over GOT-10k Dataset?

The `got10k.datasets.GOT10k` class provides an iterable and indexable interface for GOT-10k's sequences. Here is an example:
```python
from PIL import Image
from got10k.datasets import GOT10k
from got10k.utils.viz import show_frame

dataset = GOT10k(root_dir='data/GOT-10k', subset='train')

# indexing
img_file, anno = dataset[10]

# for-loop
for s, (img_files, anno) in enumerate(dataset):
    seq_name = dataset.seq_names[s]
    print('Sequence:', seq_name)

    # show all frames
    for f, img_file in enumerate(img_files):
        image = Image.open(img_file)
        show_frame(image, anno[f, :])
```
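Both the indexing and the for-loop above rely only on the standard `__getitem__`/`__len__` interface. A toy dataset with the same shape (purely hypothetical, no relation to the toolkit's internals) shows the pattern:

```python
class ToyDataset:
    """Toy sequence dataset exposing the same indexable/iterable shape
    as got10k.datasets.GOT10k (hypothetical, for illustration)."""

    def __init__(self):
        self.seq_names = ['seq_a', 'seq_b']
        self._data = {
            'seq_a': (['a_0001.jpg', 'a_0002.jpg'],
                      [[0, 0, 10, 10], [1, 1, 10, 10]]),
            'seq_b': (['b_0001.jpg'],
                      [[5, 5, 20, 20]]),
        }

    def __len__(self):
        return len(self.seq_names)

    def __getitem__(self, index):
        # raises IndexError past the end, which also terminates iteration
        name = self.seq_names[index]
        return self._data[name]  # (img_files, anno) pair

dataset = ToyDataset()
img_files, anno = dataset[0]  # indexing
for s, (img_files, anno) in enumerate(dataset):  # for-loop
    print(dataset.seq_names[s], len(img_files))
```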
To loop over the OTB or VOT datasets, simply change `GOT10k` to `OTB` or `VOT`.
### Issues

Please report any problems or suggestions on the Issues page.