The TIL2023 CV Qualifiers code repository
Run this command to clone the repository:
git clone https://github.com/til-23/til-23-cv.git
To install the requirements, create a virtual environment and install the yolov5 requirements inside it. You will also need the yolov5 repository itself in order to use our pretrained weights.
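For example, with a Python virtual environment this could look something like the following (assuming you run it from inside the cloned yolov5 repo, which provides the requirements.txt):
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt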
We provide pretrained yolov5 weights as a base for you to finetune, but you are free to use other object detection libraries. To finetune the weights on your dataset, run the following command from the yolov5 repo:
python train.py --data coco.yaml --epochs 300 --weights 'pretrained_weights.pt' --cfg yolov5n.yaml --batch-size 128
You can also refer to this tutorial on training a yolov5 model.
Refer to src/reID. The directory contains the following files:

- dataset.py - This file converts your images into a torch.utils.data.Dataset class. You will need cropped images of your plushies in the LFW format for it to be compatible.
- transforms.py - This file preprocesses your images to ensure they are ingestible by the model. The most important preprocessing step is to resize each image to a standard size before it is passed into the model.
- model.py - This file contains the Siamese Network. This is the model you will train (see the sketch after this list).
- train.py - This file contains the code to fit your model to the dataset.
- test.py - This file lets you test your model on a pair of plushie images.
- utils.py - This file contains miscellaneous functions that you may find useful.
- model.pth - A pretrained reID model provided as a baseline.
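To make the reID pieces concrete, here is a minimal sketch of how a resize transform and a Siamese network typically fit together for comparing a pair of crops. The specific choices below (224x224 resize, a tiny CNN encoder, a 128-dim embedding, the image file names) are illustrative assumptions only; the actual implementations live in src/reID.

# Illustrative sketch only; the real transforms, model and test code live in src/reID.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

# transforms.py idea: resize every crop to a standard size before it reaches the model.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # assumed standard size
    transforms.ToTensor(),
])

# model.py idea: a Siamese network embeds both images with one shared encoder,
# so the same weights process either side of the pair.
class SiameseNetwork(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(  # assumed small CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, img_a, img_b):
        return self.encoder(img_a), self.encoder(img_b)

# test.py idea: score a pair of plushie crops by the distance between their embeddings.
model = SiameseNetwork().eval()
a = preprocess(Image.open("plushie_a.jpg")).unsqueeze(0)  # hypothetical file names
b = preprocess(Image.open("plushie_b.jpg")).unsqueeze(0)
with torch.no_grad():
    emb_a, emb_b = model(a, b)
    distance = torch.nn.functional.pairwise_distance(emb_a, emb_b)
    print("pair distance:", distance.item())

A smaller distance means the two crops are more likely the same plushie; train.py would fit the encoder with a loss such as contrastive or triplet loss so that distances behave this way.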
We have also provided boilerplate code that detects plushies in a scene and ReIDs a particular plushie among the detections:
python3 src/inference.py
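For orientation, the overall flow of that script is roughly: detect plushies with the finetuned detector, crop each detection, embed the crops and the query plushie with the reID model, and pick the closest match. The sketch below assumes yolov5 is loaded through torch.hub and reuses the hypothetical SiameseNetwork and preprocess from the sketch above; src/inference.py is the authoritative implementation.

# Rough sketch of the detect-then-reID flow; see src/inference.py for the real logic.
import torch
from PIL import Image

detector = torch.hub.load("ultralytics/yolov5", "custom", path="pretrained_weights.pt")
reid = SiameseNetwork()  # hypothetical class from the sketch above
reid.load_state_dict(torch.load("src/reID/model.pth"))  # assumes model.pth holds a state_dict
reid.eval()

scene = Image.open("scene.jpg")  # hypothetical file names
query = preprocess(Image.open("query_plushie.jpg")).unsqueeze(0)

# 1. Detect plushies and crop each bounding box out of the scene.
results = detector(scene)
boxes = results.xyxy[0]  # one row per detection: (x1, y1, x2, y2, conf, class)
crops = [preprocess(scene.crop(tuple(b[:4].tolist()))).unsqueeze(0) for b in boxes]

# 2. ReID: the crop whose embedding lies closest to the query is the match.
with torch.no_grad():
    distances = [
        torch.nn.functional.pairwise_distance(*reid(query, crop)).item() for crop in crops
    ]
best = distances.index(min(distances))
print("Best-matching detection box:", boxes[best][:4].tolist())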