Artificial intelligence model, training loop, concurrent model loader, inference
Image processing now runs concurrently.
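A minimal sketch of what concurrent image loading can look like, assuming a Python thread pool and OpenCV; load_image and the file list are illustrative assumptions, not the repository's actual loader:

import concurrent.futures
import cv2

def load_image(path):
    # Read and decode one image from disk.
    return cv2.imread(path)

def load_images_concurrently(paths, workers=8):
    # Decode several images in parallel threads; cv2.imread releases the
    # GIL during file I/O, so a thread pool gives a real speedup here.
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(load_image, paths))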
Full test with car detection
https://www.youtube.com/watch?v=VmhW1dGFPMw&t=1s&ab_channel=SamuelBachorik
Custom AI model trained on a custom dataset (1000 photos).
GPU used = RTX 3060 12GB
Example:
AI model created with PyTorch.
The model is trained on the GPU; the model's device is set to "cuda".
Model architecture (sketched in code below)
- Encoder-decoder model
- 16x Conv2d layers, divided into an encoder and a decoder, with ReLU activations + BatchNorm2d
- All of this in nn.Sequential
- 3x3 kernels
- 17,360,898 model parameters
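As a sketch under assumptions, such an architecture can be written like this; the channel widths below are illustrative and will not reproduce the exact 17,360,898-parameter count:

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # The repeating unit: Conv2d with a 3x3 kernel + BatchNorm2d + ReLU.
    return [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU()]

# 17 channel values -> 16 Conv2d layers; the widths are assumptions.
channels = [3, 32, 64, 64, 128, 128, 256, 256, 512,   # encoder half
            256, 256, 128, 128, 64, 64, 32, 3]        # decoder half

layers = []
for c_in, c_out in zip(channels[:-1], channels[1:]):
    layers += conv_block(c_in, c_out)

model = nn.Sequential(*layers)

# Train on the GPU, as noted above.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)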
90-110 epochs are enough for this model with this dataset.
For this project I used the mean squared error loss function: loss = ((y - y_pred) ** 2).mean()
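A minimal training-step sketch built around that loss, continuing from the architecture sketch above; the Adam optimizer, learning rate, and loader variable are assumptions:

import torch

# 'model' and 'device' come from the sketch above; 'loader' is an assumed
# DataLoader yielding (input, target) image batches.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(100):                   # 90-110 epochs, per the note above
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        y_pred = model(x)
        loss = ((y - y_pred) ** 2).mean()  # the MSE loss used in this project
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()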
The dataset consists of 1300 photos of the city in rainy and sunny weather, split 50% rainy / 50% sunny.
Link to dataset photos -
Dataset
Example:
In Run_training.py, set the folders for the downloaded dataset like this:
folders_training.append("C:/Users/Samuel/PycharmProjects/Conda/City_dataset/City_sunny1/")
folders_training.append("C:/Users/Samuel/PycharmProjects/Conda/City_dataset/City_sunny2/")
folders_training.append("C:/Users/Samuel/PycharmProjects/Conda/City_dataset/City_rainy/")
folders_training.append("C:/Users/Samuel/PycharmProjects/Conda/City_dataset/City_rainy2/")
folders_training.append("C:/Users/Samuel/PycharmProjects/Conda/City_dataset/City_2/")
Then run Run_training.py.
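For orientation, a hedged sketch of how those folders can be turned into a list of training images; the glob pattern and eager loading are assumptions, not necessarily what Run_training.py does:

import glob
import cv2

folders_training = []
folders_training.append("C:/Users/Samuel/PycharmProjects/Conda/City_dataset/City_sunny1/")
# ... the remaining dataset folders are appended the same way ...

# Collect every image path from the configured folders.
paths = []
for folder in folders_training:
    paths += glob.glob(folder + "*.jpg")

frames = [cv2.imread(p) for p in paths]  # or load concurrently, as noted at the top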
When the model is trained, you can run inference like this:
In segmentation_inference, set the saved model path:
# Path to trained weights
self.PATH = "./Model1"
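Loading those weights then typically looks like the standard PyTorch pattern below; that the file holds a plain state dict is an assumption:

import torch

# 'model' must be the same architecture the weights were trained with.
state = torch.load("./Model1", map_location="cuda")
model.load_state_dict(state)
model.eval()  # inference mode, so BatchNorm2d uses its running statistics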
In Run_video_inference.py, set your desired video:
cap = cv2.VideoCapture("C:/Users/Samuel/PycharmProjects/Condapytorch/City.mp4")
Run Run_video_inference.py
When inference is done, you will get an output.avi video.
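The loop behind that output likely resembles the sketch below; it assumes the input video is already 1024x768 and that the model preserves spatial size, with 'model' and 'device' as in the sketches above:

import cv2
import numpy as np
import torch

cap = cv2.VideoCapture("C:/Users/Samuel/PycharmProjects/Condapytorch/City.mp4")
out = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"XVID"),
                      30.0, (1024, 768))

with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # HWC uint8 frame -> normalized NCHW float tensor.
        x = torch.from_numpy(frame).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        y = model(x.to(device)).clamp(0, 1)
        # Model output back to an HWC uint8 frame for the writer.
        mask = (y.squeeze(0).permute(1, 2, 0).cpu().numpy() * 255).astype(np.uint8)
        out.write(mask)

cap.release()
out.release()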
If you want to run inference with a pretrained model, you need to download the trained model weights -
You can choose -
Model weights
IMPORTANT: Make sure your downloaded model weights correspond to the model architecture.
- Model_weights_1 for Model 1
- Model_weights_2 for Model 2
In segmentation_inference, set the model path:
# Path to trained weights
self.PATH = "Model_weights_1.pth"
In Run_video_inference.py, set your desired video:
cap = cv2.VideoCapture("C:/Users/Samuel/PycharmProjects/Condapytorch/City.mp4")
Run Run_video_inference.py
When inference is done, you will get an output.avi video.
For best results, use a video with a 4:3 aspect ratio and a resolution of 1024x768 (see the resize sketch at the end).
You can test the model on your own video, or you can download one here - Download Video
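If your own video is not already 1024x768, a quick OpenCV resize pass, as sketched here, produces a 4:3 input at that resolution; the file names are placeholders:

import cv2

cap = cv2.VideoCapture("my_video.mp4")           # placeholder input name
out = cv2.VideoWriter("my_video_1024x768.avi",
                      cv2.VideoWriter_fourcc(*"XVID"), 30.0, (1024, 768))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(cv2.resize(frame, (1024, 768)))    # force 4:3 at 1024x768

cap.release()
out.release()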