Step 1. Install MOT-tools.
git clone https://github.com/guanzhiyu817/MOT-tools.git
cd MOT-tools
pip3 install -r requirements.txt
python3 setup.py develop
Step 2. Install pycocotools.
pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
Step 3. Install other dependencies.
pip3 install cython_bbox
Download MOT17, MOT20, and DanceTrack, and put them under /datasets in the following structure:
datasets
|——————MOT17
|        └——————train
|        └——————test
|——————MOT20
|        └——————train
|        └——————test
└——————dancetrack
         └——————train
         └——————val
         └——————test
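Before running the conversion scripts, it can help to verify that the layout above is in place. The following is a minimal sketch, not part of the repo; the helper name `missing_splits` and the hard-coded split table are assumptions based only on the tree shown above.

```python
from pathlib import Path

# Expected layout, taken from the directory tree in this README.
EXPECTED = {
    "MOT17": ["train", "test"],
    "MOT20": ["train", "test"],
    "dancetrack": ["train", "val", "test"],
}

def missing_splits(root):
    """Return a list of 'dataset/split' entries absent under `root`."""
    root = Path(root)
    return [
        f"{name}/{split}"
        for name, splits in EXPECTED.items()
        for split in splits
        if not (root / name / split).is_dir()
    ]
```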
Convert the MOT17 dataset to COCO format:
python3 tools/convert_mot17_to_coco.py
Convert the MOT20 dataset to COCO format:
python3 tools/convert_mot20_to_coco.py
Convert the DanceTrack dataset to COCO format:
python3 tools/convert_dance_to_coco.py
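At their core, these conversion scripts map each MOT ground-truth line (`frame,track_id,x,y,w,h,conf,class,visibility`) to a COCO-style annotation dict. The sketch below shows that per-line mapping only; the function name, the single-category assumption, and the use of the frame number as image id are illustrative, not the repo's actual implementation.

```python
def gt_line_to_coco(line, ann_id):
    """Convert one MOT gt.txt line into a COCO-style annotation dict.

    Assumed field order: frame, track_id, x, y, w, h, conf, class, visibility.
    """
    frame, tid, x, y, w, h, conf, cls, vis = [float(v) for v in line.split(",")[:9]]
    return {
        "id": ann_id,
        "image_id": int(frame),   # per-sequence frame index used as image id (assumption)
        "category_id": 1,         # single 'pedestrian' category (assumption)
        "track_id": int(tid),
        "bbox": [x, y, w, h],     # COCO boxes are top-left x, y, width, height
        "area": w * h,
        "iscrowd": 0,
    }
```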
Convert the video xxx.mp4 into xxx_converted.mp4; the new video file keeps the same resolution and frame rate as the original:
python3 tools/convert_video.py
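The naming convention implied above can be sketched as a small helper; the actual re-encoding step (reading frame size and FPS from the source and writing them unchanged) is presumably done with a video library such as OpenCV and is not shown here. The function name is hypothetical.

```python
from pathlib import Path

def converted_name(video_path):
    """Derive the output name used by the README: xxx.mp4 -> xxx_converted.mp4."""
    p = Path(video_path)
    return str(p.with_name(p.stem + "_converted" + p.suffix))
```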
Evaluate object tracking results and interpolate trajectories:
python3 tools/interpolation.py
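Trajectory interpolation fills frames where a track was temporarily lost by linearly interpolating its box between the surrounding detections. The sketch below shows this idea on an in-memory track; the real script presumably operates on MOT result .txt files, and the function name and data layout here are assumptions.

```python
def interpolate_track(boxes):
    """Fill gaps in one track by linear interpolation.

    `boxes` maps frame number -> (x, y, w, h). Frames missing between the
    first and last observation are filled linearly; observed boxes are kept.
    """
    frames = sorted(boxes)
    filled = dict(boxes)
    for f0, f1 in zip(frames, frames[1:]):
        span = f1 - f0
        if span <= 1:
            continue  # no gap between consecutive observations
        b0, b1 = boxes[f0], boxes[f1]
        for f in range(f0 + 1, f1):
            t = (f - f0) / span
            filled[f] = tuple(a + (b - a) * t for a, b in zip(b0, b1))
    return filled
```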
Convert PyTorch models to TensorRT models:
python3 tools/trt.py -c best_ckpt.pth.tar
tools/txt2video.py provides two functions, txt2img and img2video: txt2img renders a text file containing ground-truth information onto an image sequence, and img2video assembles the resulting image sequence into a video file:
python3 tools/txt2video.py