Batch equivalent of PyTorch Transforms.
```python
transform_batch = transforms.Compose([
    ToTensor(),
    Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
])

for images in data_iterator:
    images = transform_batch(images)
    output = model(images)
```
Applies the equivalent of torchvision.transforms.Normalize
to a batch of images.
Note: This transform acts out of place by default, i.e., it does not mutate the input tensor.
- mean (sequence) – Sequence of means for each channel.
- std (sequence) – Sequence of standard deviations for each channel.
- inplace (bool, optional) – Whether to perform this operation in-place. Default: False.
- dtype (torch.dtype, optional) – The data type of tensors to which the transform will be applied.
- device (torch.device, optional) – The device of tensors to which the transform will be applied.
- tensor (Tensor) – Tensor of size (N, C, H, W) to be normalized.
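The batched computation above can be sketched with plain torch ops: subtract the per-channel mean and divide by the per-channel standard deviation, broadcasting over the (N, C, H, W) batch. `batch_normalize` is a hypothetical helper name for illustration, not the library's actual implementation.

```python
import torch


def batch_normalize(tensor, mean, std, inplace=False):
    # Normalize an (N, C, H, W) batch with per-channel mean/std,
    # broadcasting the stats over N, H, and W.
    if not inplace:
        tensor = tensor.clone()
    mean = torch.as_tensor(mean, dtype=tensor.dtype, device=tensor.device)
    std = torch.as_tensor(std, dtype=tensor.dtype, device=tensor.device)
    tensor.sub_(mean[None, :, None, None]).div_(std[None, :, None, None])
    return tensor


batch = torch.rand(8, 3, 32, 32)
out = batch_normalize(batch, (0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
```

Because the stats are reshaped to (1, C, 1, 1), the same broadcasting applies to any spatial size without a Python loop over the batch.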
Applies the equivalent of torchvision.transforms.RandomCrop
to a batch of images. Images are independently transformed.
- size (int) – Desired output size of the crop.
- padding (int, optional) – Optional padding on each border of the image. Default is None, i.e., no padding.
- device (torch.device, optional) – The device of tensors to which the transform will be applied.
- tensor (Tensor) – Tensor of size (N, C, H, W) to be randomly cropped.
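Since each image is transformed independently, every image gets its own randomly sampled crop offset. A minimal sketch of that behavior, using a Python loop for clarity (`batch_random_crop` is a hypothetical helper name; a real implementation would likely vectorize the gather across the batch):

```python
import torch
import torch.nn.functional as F


def batch_random_crop(tensor, size, padding=None):
    # Crop each image in an (N, C, H, W) batch at an
    # independently sampled top-left corner.
    if padding is not None:
        # Zero-pad each spatial border before cropping.
        tensor = F.pad(tensor, (padding, padding, padding, padding))
    n, c, h, w = tensor.shape
    out = torch.empty(n, c, size, size, dtype=tensor.dtype, device=tensor.device)
    for i in range(n):
        top = torch.randint(0, h - size + 1, (1,)).item()
        left = torch.randint(0, w - size + 1, (1,)).item()
        out[i] = tensor[i, :, top:top + size, left:left + size]
    return out


imgs = torch.rand(4, 3, 40, 40)
cropped = batch_random_crop(imgs, 32)
```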
Applies the equivalent of torchvision.transforms.RandomHorizontalFlip
to a batch of images. Images are independently transformed.
Note: This transform acts out of place by default, i.e., it does not mutate the input tensor.
- p (float) – Probability of an image being flipped.
- inplace (bool, optional) – Whether to perform this operation in-place. Default: False.
- tensor (Tensor) – Tensor of size (N, C, H, W) to be randomly flipped.
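Because the flip decision is made per image, this can be sketched by sampling a Boolean mask of length N and reversing the width dimension for the selected images. `batch_random_hflip` is a hypothetical helper name for illustration:

```python
import torch


def batch_random_hflip(tensor, p=0.5, inplace=False):
    # Flip each image in an (N, C, H, W) batch horizontally
    # with independent probability p.
    if not inplace:
        tensor = tensor.clone()
    flip = torch.rand(tensor.size(0)) < p  # one coin toss per image
    tensor[flip] = tensor[flip].flip(-1)   # reverse the W dimension
    return tensor
```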
Applies the equivalent of torchvision.transforms.ToTensor
to a batch of images.
- tensor (Tensor) – Tensor of size (N, C, H, W) to be tensorized.
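For a single image, torchvision.transforms.ToTensor yields a float tensor scaled to [0, 1]; the batch equivalent of that scaling step can be sketched as below. This assumes a uint8 input batch already in (N, C, H, W) layout, per the parameter description above; `batch_to_tensor` is a hypothetical helper name:

```python
import torch


def batch_to_tensor(tensor):
    # Convert a uint8 (N, C, H, W) batch to float32 in [0, 1],
    # mirroring ToTensor's scaling for a single image.
    return tensor.float().div(255)


u8 = torch.randint(0, 256, (2, 3, 4, 4), dtype=torch.uint8)
floats = batch_to_tensor(u8)
```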