Image Test Time Augmentation with Paddle2.0!
           Input
             |           # input batch of images
        / / /|\ \ \      # apply augmentations (flips, rotation, scale, etc.)
       | | | | | | |     # pass augmented batches through model
       | | | | | | |     # reverse transformations for each batch of masks/labels
        \ \ \|/ / /      # merge predictions (mean, max, gmean, etc.)
             |           # output batch of masks/labels
           Output
After defining your network, you can use the following wrappers to run test-time augmentation at inference time.
Segmentation model wrapping [docstring]:
import patta as tta
tta_model = tta.SegmentationTTAWrapper(model, tta.aliases.d4_transform(), merge_mode='mean')
Classification model wrapping [docstring]:
tta_model = tta.ClassificationTTAWrapper(model, tta.aliases.five_crop_transform())
Keypoints model wrapping [docstring]:
tta_model = tta.KeypointsTTAWrapper(model, tta.aliases.flip_transform(), scaled=True)
Note: the model must return keypoints in the format Tensor([x1, y1, ..., xn, yn])
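A wrapped model is called just like the original network. A minimal sketch, assuming `model` is an ordinary Paddle network and using a dummy input batch (the shape is illustrative only):
import paddle
import patta as tta
tta_model = tta.ClassificationTTAWrapper(model, tta.aliases.five_crop_transform())
images = paddle.rand([8, 3, 224, 224])   # dummy (B, C, H, W) batch, shape is an assumption
with paddle.no_grad():
    logits = tta_model(images)           # single merged prediction per image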
If you have an exported static model (*.pdmodel, *.pdiparams, *.pdiparams.info), you can load it and run test-time augmentation as follows.
Load model [docstring]:
import patta as tta
model = tta.load_model(path='output/model')
Segmentation model wrapping [docstring]:
tta_model = tta.SegmentationTTAWrapper(model, tta.aliases.d4_transform(), merge_mode='mean')
Classification model wrapping [docstring]:
tta_model = tta.ClassificationTTAWrapper(model, tta.aliases.five_crop_transform())
Keypoints model wrapping [docstring]:
tta_model = tta.KeypointsTTAWrapper(model, tta.aliases.flip_transform(), scaled=True)
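A sketch of the static-model workflow end to end; the input shape and the final argmax step are illustrative assumptions, not part of the API:
import paddle
import patta as tta
model = tta.load_model(path='output/model')
tta_model = tta.SegmentationTTAWrapper(model, tta.aliases.d4_transform(), merge_mode='mean')
images = paddle.rand([1, 3, 512, 512])   # dummy (B, C, H, W) batch, shape is an assumption
with paddle.no_grad():
    logits = tta_model(images)           # merged per-pixel predictions
pred = paddle.argmax(logits, axis=1)     # hypothetical post-processing to a class map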
Segmentation model [docstring]:
We recommend adapting seg.py to your own model before running it:
python seg.py --model_path='output/model' \
--batch_size=16 \
--test_dataset='test.txt'
Note: this script is related to PaddleSeg.
# defines 2 * 2 * 3 * 3 = 36 augmentations
transforms = tta.Compose(
[
tta.HorizontalFlip(),
tta.Rotate90(angles=[0, 180]),
tta.Scale(scales=[1, 2, 4]),
tta.Multiply(factors=[0.9, 1, 1.1]),
]
)
tta_model = tta.SegmentationTTAWrapper(model, transforms)
# Example of how to process ONE batch of images with TTA
# Here `image`/`mask` are 4D tensors (B, C, H, W), `label` is a 2D tensor (B, N)
import paddle
masks, labels = [], []
for transformer in transforms:  # custom transforms or e.g. tta.aliases.d4_transform()
    # augment image
    augmented_image = transformer.augment_image(image)
    # pass to model
    model_output = model(augmented_image, another_input_data)
    # reverse augmentation for mask and label
    deaug_mask = transformer.deaugment_mask(model_output['mask'])
    deaug_label = transformer.deaugment_label(model_output['label'])
    # save results
    masks.append(deaug_mask)
    labels.append(deaug_label)
# reduce results as you want, e.g. mean/max/min
mask = paddle.mean(paddle.stack(masks), axis=0)
label = paddle.mean(paddle.stack(labels), axis=0)
| Transform | Parameters | Values |
|---|---|---|
| HorizontalFlip | - | - |
| VerticalFlip | - | - |
| HorizontalShift | shifts | List[float] |
| VerticalShift | shifts | List[float] |
| Rotate90 | angles | List[0, 90, 180, 270] |
| Scale | scales<br>interpolation | List[float]<br>"nearest"/"linear" |
| Resize | sizes<br>original_size<br>interpolation | List[Tuple[int, int]]<br>Tuple[int, int]<br>"nearest"/"linear" |
| Add | values | List[float] |
| Multiply | factors | List[float] |
| FiveCrops | crop_height<br>crop_width | int<br>int |
| AdjustContrast | factors | List[float] |
| AdjustBrightness | factors | List[float] |
| AverageBlur | kernel_sizes | List[Union[Tuple[int, int], int]] |
| GaussianBlur | kernel_sizes<br>sigma | List[Union[Tuple[int, int], int]]<br>Optional[Union[Tuple[float, float], float]] |
| Sharpen | kernel_sizes | List[int] |
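Transforms from the table can be combined freely with tta.Compose; a sketch with illustrative parameter values (not recommendations):
import patta as tta
transforms = tta.Compose(
    [
        tta.VerticalFlip(),
        tta.HorizontalShift(shifts=[0.1, -0.1]),           # values illustrative only
        tta.GaussianBlur(kernel_sizes=[3, 5], sigma=1.0),   # values illustrative only
        tta.AdjustContrast(factors=[0.9, 1.1]),
    ]
)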
Aliases (ready-made combinations of transforms):
- flip_transform (horizontal + vertical flips)
- hflip_transform (horizontal flip)
- d4_transform (flips + rotation 0, 90, 180, 270)
- multiscale_transform (scale transform, take scales as input parameter)
- five_crop_transform (corner crops + center crop)
- ten_crop_transform (five crops + five crops on horizontal flip)
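An alias can be used anywhere a hand-built compose is expected; a sketch using multiscale_transform (the keyword name and scale values are assumptions based on the list above):
import patta as tta
transforms = tta.aliases.multiscale_transform(scales=[1, 2])  # scales value is illustrative
tta_model = tta.SegmentationTTAWrapper(model, transforms)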
Merge modes:
- mean
- gmean (geometric mean)
- sum
- max
- min
- tsharpen (temperature sharpen with t=0.5)
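Any of these modes can be selected via the wrapper's merge_mode argument, for example the geometric mean:
tta_model = tta.SegmentationTTAWrapper(model, tta.aliases.d4_transform(), merge_mode='gmean')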
PyPI:
# Use pip install PaTTA
$ pip install patta
or
# Install from the cloned repository
$ git clone https://github.com/AgentMaker/PaTTA.git
$ pip install PaTTA/
Run tests:
$ python -m pytest
Email : agentmaker@163.com
QQ Group : 1005109853