An architecture for learning feature representations for target re-identification in long-term DCF tracking
The KCF tracker with HOG features in `baseline/` references pyTrackers (MIT license). The DCFNet baseline in `models/` references DCFNet (MIT license).
For dependencies, see `requirement.txt`.
- `KCF_HOG()` in `baseline/kcf.py` is the only KCF version; it is used in testing and in the demo.
- `DCFNet()` in `models/DCFnet.py` is for training; its `forward` method takes `(template, search)` as input.
- `DCFNet()` in `models/DCFnet_track.py` is for tracking and re-id testing in `test.py`; its `forward` method takes only `(search)` as input, and the template is updated through the `update` method.
- `DCFNetTracker()` in `models/DCFnet_track.py` can be used for continuous tracking with the `track` method and for re-id testing with the `runResponseAnalysis` and `runRotationAnalysis` methods.
- `SqueezeCFNet()` in `models/squeezeCFnet.py` is for training; its `forward` method takes `(template, search, negative)` as input.
- `SqueezeCFNet()` in `models/squeezeCFnet_track.py` is for tracking and re-id testing in `test.py`; its `forward` method takes only `(search)` as input, and the template is updated through the `update` method.
- `SqueezeCFNetTracker()` in `models/squeezeCFnet_track.py` can be used for continuous tracking with the `track` method and for re-id testing with the `runResponseAnalysis` and `runRotationAnalysis` methods (see the usage sketch after this list).
- `SqueezeCFNet_light()` and `SqueezeCFNetTracker_light()` in `models/squeezeCFnet_track.py` are for tracking and speed testing in `speed_test.py`; they skip the encoding stage and only run the shallow part of the network in the forward pass.
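A minimal sketch of continuous tracking with `SqueezeCFNetTracker()`; the constructor arguments (first frame, initial bounding box, checkpoint path), the checkpoint filename, and the exact `track` signature are assumptions and should be checked against `models/squeezeCFnet_track.py`:

```python
import glob

import cv2

from models.squeezeCFnet_track import SqueezeCFNetTracker

# Hypothetical inputs: a sorted image sequence, an initial (x, y, w, h) box,
# and a trained checkpoint path.
frames = sorted(glob.glob("seq-root/seq1/*.jpg"))
init_box = (100, 120, 64, 64)

# Assumed constructor: first frame, initial box, checkpoint path.
tracker = SqueezeCFNetTracker(cv2.imread(frames[0]), init_box, "work/checkpoint.pth.tar")

for path in frames[1:]:
    box = tracker.track(cv2.imread(path))  # `track` is the method named above
    print(path, box)
```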
- Raw training and validation data are downloaded from FathomNet using the fathomnet-py API. Examples of the raw FathomNet data are in `curate_dataset/data_sample/FathomNet_sample.*`.
- Then run `curate_dataset/gen_patch.py` to generate training and validation image patches. Replace `folder_list` with the root directory of the raw FathomNet data, and `dataset_root` with a new directory for the generated training and validation image patches.
- Then run `curate_dataset/gen_json.py` (replacing `dataset_root` with the directory of the generated image patches) to generate the dataset json file that links to all the image patches. Examples of the json files are `curate_dataset/data_sample/FathomNet*.json` (a quick sanity check is sketched below).
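A small sketch for inspecting the generated (or sample) dataset json before training; it only reports the file size and does not assume anything about the internal structure beyond what is stated above:

```python
import glob
import json

# The sample filenames follow the FathomNet*.json pattern mentioned above;
# point the glob at your own dataset_root output instead if desired.
for path in glob.glob("curate_dataset/data_sample/FathomNet*.json"):
    with open(path) as f:
        dataset = json.load(f)
    # Report only the size; the file links to all generated image patches.
    print(path, type(dataset).__name__, len(dataset))
```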
$ python train.py --dataset <path-to-dataset *.json file> [options]
$ python train_DCFNet.py --dataset <path-to-dataset *.json file> [options]
$ python test.py --seq-root <root directory to image sequence folders> --json-path <path to dataset *.json file> --test-mode <0:re-id on image sequence, 1:re-id on FathomNet training set, 2:re-id on transformation>
- Test mode 0: re-id on labeled images from the image sequence data
- Test mode 1: re-id on FathomNet training images
- Test mode 2: re-id on images from the image sequence data after transformations (rotations, flipping, etc.)
The image sequences for testing need to have the following structure. Each image sequence comes from a continuously tracked video. Annotation is done every 50 frames using the VGG Image Annotator. An example of the json annotation file can be found at `curate_dataset/data_sample/annotation.json`.
├── seq-root
│ ├── seq1
│ │ ├── *.jpg
│ │ ├── str(frame_number).zfill(6).jpg
│ │ ├── *.jpg
│ │ ├── annotation.json
│ ...
│ ├── seqN
│ │ ├── *.jpg
│ │ ├── str(frame_number).zfill(6).jpg
│ │ ├── *.jpg
│ │ ├── annotation.json
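A short sketch, under the layout above, of how a frame path and its per-sequence annotation file can be resolved; the sequence name and frame number are placeholders:

```python
import json
import os

seq_dir = "seq-root/seq1"  # placeholder sequence directory
frame_number = 150         # annotations are made every 50 frames

# Frames are named by zero-padding the frame number to six digits, as in the tree above.
frame_path = os.path.join(seq_dir, str(frame_number).zfill(6) + ".jpg")  # -> seq-root/seq1/000150.jpg

# Each sequence keeps its own VGG Image Annotator export next to the frames.
with open(os.path.join(seq_dir, "annotation.json")) as f:
    annotation = json.load(f)

print(frame_path, type(annotation).__name__)
```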
- Demo: use the function `processImSeq` in `demo.py` to perform tracking on continuous image sequences, and the function `analyzeImSeq` in `demo.py` to get confidence scores on all labeled objects from the three different types of trackers. Update the image sequence directory in the script before use (a usage sketch follows at the end of this section).
- Speed test: run `speed_test.py`, replacing the image sequence directory in the script before use.
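A minimal sketch of driving the demo functions from another script, assuming `processImSeq` and `analyzeImSeq` can be imported from `demo.py` and accept the sequence directory as an argument; as noted above, the shipped script expects the directory to be edited inside `demo.py`, so treat the call signatures as assumptions:

```python
# Assumed imports and signatures; in the shipped script the sequence directory is
# edited inside demo.py rather than passed in, so adapt as needed.
from demo import processImSeq, analyzeImSeq

seq_dir = "seq-root/seq1"  # placeholder image sequence directory

processImSeq(seq_dir)   # continuous tracking over the image sequence
analyzeImSeq(seq_dir)   # confidence scores for all labeled objects from the three tracker types
```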