
Unsupervised Domain Adaptation for Weed Segmentation

This repository contains the code to reproduce the experiments reported in the paper *A comparative study of Fourier transform and CycleGAN as domain adaptation techniques for weed segmentation*.

Scripts description

config: Directory containing the configuration files that set each experiment's parameters. There is one configuration file per Python script, named after the corresponding script followed by the '_cfg' suffix. The main parameters present in each configuration file are described below (an illustrative sketch follows the list):

  • pad = size of image padding
  • image_w = image width
  • image_h = image height
  • range_h and range_w: before extracting a random patch, the image is resized to (image_w * range_w) × (image_h * range_h). The values of range_h and range_w differ per team to account for the different camera zoom:
    • Bipbip team:
      • range_w: 3
      • range_h: 2
    • Weedelec team:
      • range_w: 4
      • range_h: 3
  • team = name of the team/robot which acquired the images
  • plant = kind of crop: {bean, maize}
  • rotate = whether images need to be rotated when evaluating on the validation set
  • weights = either None or a list of class weights used to build a weighted soft IoU loss
  • lr = learning rate
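
The exact layout of these configuration files is not reproduced here, so the snippet below is only a hypothetical sketch with illustrative values (Bipbip team, bean crop); consult the files in config/ for the real settings.

```python
# Hypothetical configuration sketch -- values are illustrative, not taken from the repository.
pad = 32          # size of image padding
image_w = 512     # patch width
image_h = 512     # patch height
range_w = 3       # Bipbip: image resized to (image_w * range_w) in width before cropping
range_h = 2       # Bipbip: image resized to (image_h * range_h) in height before cropping
team = "Bipbip"   # team/robot which acquired the images
plant = "bean"    # kind of crop: "bean" or "maize"
rotate = False    # whether images are rotated when evaluating on the validation set
weights = None    # or e.g. [0.1, 0.45, 0.45] for a weighted soft IoU loss
lr = 2e-4         # learning rate
```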

cycleGAN.py: Training of the CycleGAN architecture
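
For orientation, the generator update in CycleGAN combines an adversarial term with a cycle-consistency term. The sketch below is a minimal PyTorch illustration of that objective, assuming an LSGAN-style adversarial loss; the names G_st, G_ts, D_t and lambda_cyc are placeholders, not identifiers from this repository.

```python
import torch
import torch.nn.functional as F

def cycle_gan_generator_loss(G_st, G_ts, D_t, real_s, lambda_cyc=10.0):
    """Generator-side CycleGAN loss for one direction (source -> target).

    G_st: generator source -> target, G_ts: generator target -> source,
    D_t: discriminator on the target domain. All are torch.nn.Module.
    """
    fake_t = G_st(real_s)   # translate the source image to the target style
    rec_s = G_ts(fake_t)    # translate back to the source domain

    # Adversarial loss: the generator tries to make D_t score fake_t as real (1).
    pred_fake = D_t(fake_t)
    adv = F.mse_loss(pred_fake, torch.ones_like(pred_fake))

    # Cycle-consistency loss: the reconstruction should match the original image.
    cyc = F.l1_loss(rec_s, real_s)

    return adv + lambda_cyc * cyc
```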

dataset.py: Functions implementing the different ways of extracting images for training
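
The resize-then-crop behaviour controlled by image_w, image_h, range_w and range_h (see the config section above) could look roughly like the following PIL-based sketch; the function name is hypothetical and dataset.py may implement it differently.

```python
import random
from PIL import Image

def extract_random_patch(image: Image.Image, image_w: int, image_h: int,
                         range_w: int, range_h: int) -> Image.Image:
    # Resize the full image to (image_w * range_w) x (image_h * range_h) so that
    # the subsequent crop covers a comparable field of view across teams with
    # different camera zoom.
    resized = image.resize((image_w * range_w, image_h * range_h))

    # Pick a random top-left corner and crop an image_w x image_h patch.
    left = random.randint(0, image_w * (range_w - 1))
    top = random.randint(0, image_h * (range_h - 1))
    return resized.crop((left, top, left + image_w, top + image_h))
```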

discriminator.py: Implementation of the discriminator architecture
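
CycleGAN implementations commonly use a PatchGAN discriminator that scores overlapping image patches rather than the whole image. The PyTorch sketch below illustrates that idea; the actual architecture in discriminator.py may differ.

```python
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: outputs a grid of real/fake scores, one per patch."""
    def __init__(self, in_channels=3, base=64):
        super().__init__()
        def block(cin, cout, norm=True):
            layers = [nn.Conv2d(cin, cout, 4, stride=2, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(cout))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers
        self.model = nn.Sequential(
            *block(in_channels, base, norm=False),
            *block(base, base * 2),
            *block(base * 2, base * 4),
            *block(base * 4, base * 8),
            nn.Conv2d(base * 8, 1, 4, padding=1),  # one score per receptive-field patch
        )

    def forward(self, x):
        return self.model(x)
```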

Fourier.py: Training using FFT-based domain adaptation
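
A common FFT-based adaptation strategy (Fourier Domain Adaptation) keeps the phase of a source image but replaces its low-frequency amplitude spectrum with that of a target image. The NumPy sketch below illustrates this idea; it is not necessarily what Fourier.py implements, and the function name and beta parameter are hypothetical.

```python
import numpy as np

def fourier_amplitude_swap(source: np.ndarray, target: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """Replace the low-frequency amplitude of `source` with that of `target`.

    Both inputs are HxWxC float arrays of the same shape; `beta` controls the
    size of the swapped low-frequency square as a fraction of the image size.
    """
    src_fft = np.fft.fft2(source, axes=(0, 1))
    tgt_fft = np.fft.fft2(target, axes=(0, 1))

    src_amp, src_phase = np.abs(src_fft), np.angle(src_fft)
    tgt_amp = np.abs(tgt_fft)

    # Centre the zero frequency so the "low frequencies" form a square
    # around the middle of the spectrum.
    src_amp = np.fft.fftshift(src_amp, axes=(0, 1))
    tgt_amp = np.fft.fftshift(tgt_amp, axes=(0, 1))

    h, w = source.shape[:2]
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    src_amp[ch - b:ch + b, cw - b:cw + b] = tgt_amp[ch - b:ch + b, cw - b:cw + b]

    # Recombine the mixed amplitude with the original source phase.
    src_amp = np.fft.ifftshift(src_amp, axes=(0, 1))
    mixed = src_amp * np.exp(1j * src_phase)
    return np.real(np.fft.ifft2(mixed, axes=(0, 1)))
```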

gen_paper.py: Implementation of the ResNet generator architecture as proposed in the original CycleGAN paper

generator.py: Our ResNet implementation
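
As a reference for the two generator files above, the original CycleGAN generator stacks residual blocks of the following form. This PyTorch sketch is generic and may differ in detail (normalisation, padding) from gen_paper.py and generator.py.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection, as in the CycleGAN ResNet generator."""
    def __init__(self, channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # residual (skip) connection
```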

predict.py: Baseline training in which the model is trained on source images and used to predict on target images

source_on_target.py: Evaluation on target images of a model trained on source images

training_utils.py: Functions used for training purposes
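
One training utility suggested by the `weights` config parameter is a weighted soft IoU loss. The following is a minimal, generic PyTorch sketch of such a loss, assuming softmax probabilities and one-hot targets; the function and argument names are illustrative, not the repository's.

```python
import torch

def weighted_soft_iou_loss(probs: torch.Tensor, target: torch.Tensor,
                           weights=None, eps: float = 1e-6) -> torch.Tensor:
    """Soft (differentiable) IoU loss with optional per-class weights.

    probs:   (N, C, H, W) class probabilities (e.g. after softmax).
    target:  (N, C, H, W) one-hot ground truth.
    weights: optional iterable of C per-class weights (the `weights` config value).
    """
    intersection = (probs * target).sum(dim=(0, 2, 3))
    union = (probs + target - probs * target).sum(dim=(0, 2, 3))
    iou = (intersection + eps) / (union + eps)   # per-class soft IoU

    if weights is not None:
        w = torch.as_tensor(weights, dtype=iou.dtype, device=iou.device)
        return 1.0 - (w * iou).sum() / w.sum()   # weighted mean over classes
    return 1.0 - iou.mean()
```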

uNet.py: Implementation of the segmentation architecture based on U-Net

UNETgenerator.py: Implementation of the generator architecture based on U-Net
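
Both U-Net based modules above follow the same encoder-decoder pattern with skip connections. The PyTorch sketch below shows the generic building blocks of that pattern and is not taken from this repository.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 conv + ReLU layers, the basic U-Net building block."""
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)

class UpBlock(nn.Module):
    """Upsample, concatenate the encoder skip connection, then convolve."""
    def __init__(self, cin, cout):
        super().__init__()
        self.up = nn.ConvTranspose2d(cin, cout, kernel_size=2, stride=2)
        self.conv = DoubleConv(cin, cout)  # cin = cout (upsampled) + cout (skip)

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)    # skip connection from the encoder
        return self.conv(x)
```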

utils.py: Other functions used during the training