- Download the pre-trained YOLOv3 weights:
$ wget https://pjreddie.com/media/files/yolov3.weights
- To install LabelImg:
git clone https://github.com/heartexlabs/labelImg.git
conda create -n LabelImg python=3.9.13
conda activate LabelImg
pip install labelimg
- Open LabelImg
# activate the conda environment
conda activate LabelImg
# cd into the cloned labelImg repo
ls
cd labelImg
ls
# open labelImg tool
python labelImg.py
- Annotation procedure:
# For the training set
python xml_to_csv.py --path_to_xml /path/to/train_images_folder --path_to_csv /path/to/train_images_folder/annotation.csv
# For the testing set
python xml_to_csv.py --path_to_xml /path/to/test_images_folder --path_to_csv /path/to/test_images_folder/annotation.csv
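The conversion performed by xml_to_csv.py can be sketched as follows: it walks the Pascal VOC XML files that labelImg produces and collects every bounding box into one CSV. This is a minimal sketch, not the script's exact implementation, and the column layout (filename, class, xmin, ymin, xmax, ymax) is an assumption:

```python
import csv
import xml.etree.ElementTree as ET
from pathlib import Path

def xml_to_csv(path_to_xml, path_to_csv):
    """Collect bounding-box annotations from Pascal VOC XML files into one CSV."""
    rows = []
    for xml_file in sorted(Path(path_to_xml).glob("*.xml")):
        root = ET.parse(xml_file).getroot()
        filename = root.findtext("filename")
        # One <object> element per annotated box in the image.
        for obj in root.iter("object"):
            box = obj.find("bndbox")
            rows.append([
                filename,
                obj.findtext("name"),
                int(box.findtext("xmin")), int(box.findtext("ymin")),
                int(box.findtext("xmax")), int(box.findtext("ymax")),
            ])
    with open(path_to_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "class", "xmin", "ymin", "xmax", "ymax"])
        writer.writerows(rows)
    return rows
```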
The label map file is located at /img_xml_data/labelmap.pbtxt
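For reference, a label map in .pbtxt format pairs each class name with an integer id. A minimal single-class example (the class name 'tooth' is an assumption, since the document does not list the classes):

```
item {
  id: 1
  name: 'tooth'
}
```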
# For train_gear_annotations.txt
python prepare_data.py --path_to_images /path/to/train_images_folder --path_to_csv_annotations /path/to/train_images_folder/annotation.csv --path_to_save_output /yolov3_data/train
# For test_gear_annotation.txt
python prepare_data.py --path_to_images /path/to/test_images_folder --path_to_csv_annotations /path/to/test_images_folder/annotation.csv --path_to_save_output /yolov3_data/test
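The train_gear_annotations.txt / test_gear_annotation.txt files presumably follow the annotation format of the referenced YunYang1994 YOLOv3 repository: one image per line, the image path followed by space-separated boxes written as xmin,ymin,xmax,ymax,class_id. A minimal sketch of producing that format, assuming the annotation CSV has columns filename, class, xmin, ymin, xmax, ymax:

```python
import csv
from collections import defaultdict

def csv_to_yolo_annotations(path_to_csv, path_to_txt, class_ids):
    """Group CSV boxes by image and write one 'path xmin,ymin,xmax,ymax,cls' line each."""
    boxes = defaultdict(list)
    with open(path_to_csv, newline="") as f:
        for row in csv.DictReader(f):
            boxes[row["filename"]].append(
                "{},{},{},{},{}".format(row["xmin"], row["ymin"],
                                        row["xmax"], row["ymax"],
                                        class_ids[row["class"]]))
    with open(path_to_txt, "w") as f:
        for filename, image_boxes in sorted(boxes.items()):
            f.write(filename + " " + " ".join(image_boxes) + "\n")
```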
Create the class names file at /classes/gear_teeth.names (one class name per line).
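A .names file simply lists class names, one per line, in class-id order. Assuming a single 'tooth' class for this project (the actual class names are not listed in the document):

```
tooth
```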
Edit /core/config.py to set the parameters & hyperparameters for model training, based on the machine being trained on:
- add the class name path ('/classes/gear_teeth.names') / __C.YOLO.CLASSES
- training annotation path ('/dataset/train_gear_annotations.txt') / __C.TRAIN.ANNOT_PATH
- training batch size (depending on GPU size) / __C.TRAIN.BATCH_SIZE
- training input size of neurons (depending on GPU size) / __C.TRAIN.INPUT_SIZE
- data augmentation (True or False) / __C.TRAIN.DATA_AUG
- initial and final learning rate / __C.TRAIN.LR_INIT, __C.TRAIN.LR_END
- numbers of epochs / __C.TRAIN.EPOCHS
- testing annotation path ('/dataset/test_gear_annotations.txt') / __C.TEST.ANNOT_PATH
Note: model accuracy & performance depend on hyperparameters such as the number of epochs, batch size, input size and learning rate.
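Putting the list above together, the edited section of /core/config.py might look like the following excerpt. The values are illustrative placeholders, not recommendations; the `__C` structure follows the referenced YunYang1994 repository:

```python
# /core/config.py (excerpt) -- illustrative values; tune for your machine
__C.YOLO.CLASSES     = "./classes/gear_teeth.names"
__C.TRAIN.ANNOT_PATH = "./dataset/train_gear_annotations.txt"
__C.TRAIN.BATCH_SIZE = 4        # reduce if the GPU runs out of memory
__C.TRAIN.INPUT_SIZE = 416      # must be a multiple of 32
__C.TRAIN.DATA_AUG   = True
__C.TRAIN.LR_INIT    = 1e-3
__C.TRAIN.LR_END     = 1e-6
__C.TRAIN.EPOCHS     = 30
__C.TEST.ANNOT_PATH  = "./dataset/test_gear_annotations.txt"
```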
After changing these parameters in the configuration file, start training:
python train.py
After the model has been trained, its performance can be visualized and analyzed in TensorBoard using the training log:
tensorboard --logdir './result_output/log'
Test the trained model on the testing set, then check the resulting images with bounding boxes in ./result_output/eval_detection:
python test.py
To compute the mAP of the trained model:
python mAP/main.py
The computed mAP chart can be checked in ./results/mAP.png.
The per-class AP achieved high scores, indicating that the gear teeth are detected accurately.
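mAP follows the standard detection-metric recipe: for each class, detections are matched to ground truth by IoU, a precision-recall curve is built, and the area under it is that class's AP; mAP is the mean over classes. A minimal sketch of the AP step, using all-point interpolation (an assumption about the exact variant mAP/main.py uses):

```python
def average_precision(recalls, precisions):
    """Area under the precision-recall curve with all-point interpolation."""
    # Pad the curve so it spans recall 0..1.
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    # Make precision monotonically non-increasing from right to left.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangle areas wherever recall increases.
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))
```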
The trained weights are exported to the TF2 SavedModel format, ready for production, by running:
python ckpt_to_savedModel.py
After running, the trained, production-ready model will be saved in ./SavedModel/YOLOv3_model/
To run inference with the trained model:
python run_inference.py --path_to_images './inference_data/'
- References:
https://github.com/sniper0110/YOLOv3
https://github.com/YunYang1994/TensorFlow2.0-Examples/tree/master/4-Object_Detection/YOLOV3