Image Data Augmentation
Introduction
This script augments image data annotated with LabelMe-MIT.
It also creates new JSON files for the newly generated images, i.e., it augments both the image and its annotation.
Constraints
- LabelMe should be used for annotating images
- Annotation should be a closed polygon/bounding box
- There should be one annotation in an image
This script can also work for images with multiple annotations, but it will only take into account the first annotation or the first annotation of the user-specified class.
Description
The script copies the annotated region from the reference image (the input annotated image), processes it according to the user's instructions provided in the YAML file, and then pastes it onto one of the given background images, chosen at random.
- iseg_aug_yaml.py
- input.yaml
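The copy-and-paste step described above can be pictured with the minimal sketch below. It assumes OpenCV, NumPy, and a LabelMe-style JSON with a `shapes` list of polygon `points`; the file names and paste position are placeholders, not the script's actual interface.

```python
# Minimal sketch of the copy-and-paste step (assumed file names and
# paste position; the real script reads these from input.yaml).
import json
import cv2
import numpy as np

image = cv2.imread("annotated_image.jpg")
background = cv2.imread("background.jpg")

with open("annotated_image.json") as f:
    annotation = json.load(f)

# Take the first annotation shape (see the constraint above).
points = np.array(annotation["shapes"][0]["points"], dtype=np.int32)

# Build a binary mask of the annotated polygon and crop its bounding box.
mask = np.zeros(image.shape[:2], dtype=np.uint8)
cv2.fillPoly(mask, [points], 255)
x, y, w, h = cv2.boundingRect(points)
patch = image[y:y + h, x:x + w]
patch_mask = mask[y:y + h, x:x + w]

# Paste the patch at a fixed location on the background
# (assumed to fit; the real script picks a random background and position).
bx, by = 50, 50
roi = background[by:by + h, bx:bx + w]
roi[patch_mask > 0] = patch[patch_mask > 0]

cv2.imwrite("augmented.jpg", background)
```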
Transforms
- Downscale
- Upscale
- Rotation
- Horizontal Flip
- Vertical Flip
- Random Shift
- Blur - Averaging, Gaussian Blurring, Median Blurring, Bilateral Filtering
- Noise - Gauss, Salt and Pepper, Poisson, Speckle
- Grayscale
- Brightness and Contrast
- Canny Edge Detection
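A few of these transforms map directly onto standard OpenCV/NumPy operations. The snippet below is an illustrative sketch of the blur, noise, brightness/contrast, and edge-detection variants; the parameter values are assumptions, not the defaults from `input.yaml`.

```python
# Illustrative examples of some listed transforms (parameter values assumed).
import cv2
import numpy as np

patch = cv2.imread("patch.jpg")

# Blur variants
averaged = cv2.blur(patch, (5, 5))
gaussian = cv2.GaussianBlur(patch, (5, 5), 0)
median = cv2.medianBlur(patch, 5)
bilateral = cv2.bilateralFilter(patch, 9, 75, 75)

# Gaussian noise
noise = np.random.normal(0, 10, patch.shape).astype(np.float32)
noisy = np.clip(patch.astype(np.float32) + noise, 0, 255).astype(np.uint8)

# Brightness and contrast: new_pixel = alpha * pixel + beta
adjusted = cv2.convertScaleAbs(patch, alpha=1.2, beta=30)

# Canny edge detection (expects a grayscale input)
gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
```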
Options other than transforms
- Threshold Ratio - Ratio of annotated area to background image area. Combinations below this ratio are skipped.
- User Class - Restrict transformations to a specific annotation class.
- Pad Annotation - Amount of padding to add around the annotation.
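The non-transform options above could be applied roughly as in the sketch below. The function names, default threshold, and padding logic are illustrative assumptions rather than the script's actual implementation.

```python
# Sketch of the threshold-ratio check and annotation padding
# (names and default values are assumptions for illustration).
import cv2
import numpy as np

def passes_threshold(points, background_shape, threshold_ratio=0.01):
    """Skip combinations where the annotated area is too small
    relative to the background image area."""
    annotated_area = cv2.contourArea(points.astype(np.float32))
    background_area = background_shape[0] * background_shape[1]
    return annotated_area / background_area >= threshold_ratio

def pad_annotation(points, pad, image_shape):
    """Expand the annotation's bounding box by `pad` pixels on each side,
    clamped to the image borders."""
    x, y, w, h = cv2.boundingRect(points.astype(np.int32))
    x0 = max(x - pad, 0)
    y0 = max(y - pad, 0)
    x1 = min(x + w + pad, image_shape[1])
    y1 = min(y + h + pad, image_shape[0])
    return x0, y0, x1, y1
```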
| | Lantern | Apple | Astronaut |
| --- | --- | --- | --- |
| Inputs | | | |
| Backgrounds | | | |
| Output | | | |