JonathonLuiten/TrackEval
HOTA (and other) evaluation metrics for Multi-Object Tracking (MOT).
Python · MIT License
Issues

- Negative metrics value (#147 opened, 1 comment)
- Too low HOTA and IDF1 (#146 opened, 1 comment)
- Numpy version (#145 opened, 1 comment)
- 3D MOT code (#139 opened, 1 comment)
- How to calculate HOTA for DeepSORT? (#138 opened, 1 comment)
- How are 2D bounding boxes labeled in KITTI? (#136 opened, 0 comments)
- Evaluation of class "cyclist" (#135 opened, 2 comments)
- seqinfo.ini error (#133 opened, 1 comment)
- A method to convert the result file to CSV (#129 opened, 0 comments)
- Implementation of the AI City Challenge? (#127 opened, 2 comments)
- How to evaluate the test set of KITTI 2D (#126 opened, 1 comment)
- How is ground truth derived? (#122 opened, 1 comment)
- Support BEV or 3D tracking? (#120 opened, 2 comments)
- What is the purpose of the <conf> column? (#119 opened, 0 comments)
- What is the evaluation format of OWTA? (#109 opened, 0 comments)
- Evaluated results of ByteTrack and TrackEval (#108 opened, 2 comments)
- Should MOTP be higher or lower? (#104 opened, 0 comments)
- Preparing RLE for MOTS20 evaluation (#103 opened, 4 comments)
- BUG: BURST requires `tabulate` (#101 opened, 0 comments)
- How to submit KITTI 2D tracking results to the KITTI benchmark and get results? (#95 opened, 1 comment)
- Documentation is not intuitive to understand (#91 opened, 4 comments)
- HOTA vs CLEAR (#89 opened, 0 comments)
- Huge RAM usage during evaluation (#87 opened, 0 comments)
- Classification-aware HOTA (#86 opened, 0 comments)
- Evaluating test sets in MOT20 (#85 opened, 0 comments)
- Overlap of masks in MOTS evaluation (#84 opened)