- save all images and labels in two separate folders (an image and its corresponding label must have the same file name, e.g. 'img00001.png')
- in src/data/segmentation_dataset.py, define the paths where the images and labels are stored
- create the .txt files with get_img_names.ipynb; for this, define the path of the images in that notebook (see the sketch after this list)
- open the autoencoder.ipynb notebook and define all necessary paths
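As a rough illustration of the steps above, a minimal sketch of the path configuration and the .txt generation might look as follows. The directory paths, the output file name `img_names.txt`, and the helper `write_image_name_list` are assumptions for illustration only, not the actual contents of src/data/segmentation_dataset.py or get_img_names.ipynb.

```python
# Hypothetical sketch of the path setup and .txt generation described above.
# IMG_DIR, LABEL_DIR, and the output file name are assumptions -- adapt them
# to your local setup.
import os

IMG_DIR = "/path/to/images"      # folder holding img00001.png, img00002.png, ...
LABEL_DIR = "/path/to/labels"    # folder holding labels with identical file names

def write_image_name_list(img_dir: str, out_txt: str) -> None:
    """Write one image file name per line, as get_img_names.ipynb is meant to do."""
    names = sorted(
        f for f in os.listdir(img_dir) if f.lower().endswith((".png", ".jpg"))
    )
    with open(out_txt, "w") as fh:
        fh.write("\n".join(names))

if __name__ == "__main__":
    write_image_name_list(IMG_DIR, "img_names.txt")
```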
Dataset characteristics and definition: document what the dataset contains and what it does not.
- Preprocess the dataset: apply data augmentation, and study the hand data more closely to interpret results and identify edge cases
- Expand the dataset (via data augmentation techniques that vary skin color; see the sketch after this list)
- Discuss the processing architecture:
- Train the model: divide the data into training, validation, etc. splits
- Fine-tune the parameters and iterate
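One possible way to realize the skin-color augmentation mentioned above is a hue/saturation jitter in HSV space. The function below is only a sketch under that assumption; the name `augment_skin_tone` and the parameter ranges are hypothetical, not the project's chosen technique.

```python
# Sketch of a skin-tone augmentation via hue/saturation jitter (an assumption,
# not the project's actual method). Requires opencv-python and numpy.
import cv2
import numpy as np

def augment_skin_tone(img_bgr: np.ndarray,
                      max_hue_shift: int = 10,
                      max_sat_scale: float = 0.2) -> np.ndarray:
    """Randomly shift hue and scale saturation to simulate different skin tones."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.int16)
    hue_shift = np.random.randint(-max_hue_shift, max_hue_shift + 1)
    sat_scale = 1.0 + np.random.uniform(-max_sat_scale, max_sat_scale)
    hsv[..., 0] = (hsv[..., 0] + hue_shift) % 180            # OpenCV hue range is [0, 180)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_scale, 0, 255)   # keep saturation valid
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```

Since only colors are changed, the corresponding segmentation labels remain valid without modification.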
Build knowledge into the system. We also need to define the end-to-end pipeline: get an image -> check whether the image is usable -> preprocess the image into the algorithm's input format -> process it with the learned model -> compute the landing position -> check whether the answer is correct.
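A skeleton of that pipeline, with hypothetical function names and a placeholder model interface (all assumptions, not the project's actual API), could look like this:

```python
# Skeleton of the pipeline sketched above. Function names, thresholds, and the
# model interface are placeholders for illustration.
from typing import Optional, Tuple

import cv2
import numpy as np

def is_image_usable(img: np.ndarray) -> bool:
    """Reject obviously bad inputs, e.g. empty or extremely dark frames."""
    return img.size > 0 and img.mean() > 10

def preprocess(img: np.ndarray, size: Tuple[int, int] = (256, 256)) -> np.ndarray:
    """Resize and normalize the image into the model's expected input format."""
    resized = cv2.resize(img, size)
    return resized.astype(np.float32) / 255.0

def compute_landing_position(mask: np.ndarray) -> Tuple[int, int]:
    """Use the centroid of the predicted region as the landing position."""
    ys, xs = np.nonzero(mask > 0.5)
    return (int(xs.mean()), int(ys.mean())) if len(xs) else (-1, -1)

def run_pipeline(img: np.ndarray, model) -> Optional[Tuple[int, int]]:
    if not is_image_usable(img):
        return None
    mask = model(preprocess(img))           # learned segmentation model
    return compute_landing_position(mask)   # verify against ground truth downstream
```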