mmsegmentation

OpenMMLab Semantic Segmentation Toolbox and Benchmark.



Introduction

This repository contains our setup for the FLARE2021 challenge. The work is forked directly from the MMSegmentation repository.

Our pipeline exploits the robustness of HRNet + OCRNet. Each 3D volume is split into 2D slices, which are then fed to the trained model. Because running speed and memory cost also matter, we convert the model into TorchScript format.
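The slicing step can be sketched as follows. This is a minimal illustration using numpy only; the axial slicing axis and the per-slice min-max normalization are assumptions, not necessarily the exact preprocessing used in this repository:

```python
import numpy as np

def volume_to_slices(volume: np.ndarray) -> list:
    """Split a 3D volume of shape (H, W, D) into D per-slice 2D uint8 images.

    Each slice is min-max normalized independently (illustrative choice).
    """
    slices = []
    for k in range(volume.shape[-1]):
        sl = volume[..., k].astype(np.float32)
        lo, hi = sl.min(), sl.max()
        sl = (sl - lo) / (hi - lo) if hi > lo else np.zeros_like(sl)
        slices.append((sl * 255).astype(np.uint8))
    return slices

# Example: a fake 4-slice CT volume with Hounsfield-like values
vol = np.random.randint(-1000, 1000, size=(64, 64, 4)).astype(np.int16)
imgs = volume_to_slices(vol)
```

In practice the volume would come from `nibabel.load(path).get_fdata()` on a .nii file, and each 2D array would be saved as a PNG before being passed to the segmentation model.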

Installing

pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
pip install nibabel
pip install Pillow
pip install -U scikit-learn
pip install -e .

Inference

First, download our pretrained model from Google Drive and put it anywhere.

To reproduce the result, run the following:

python script/inference.py --model MODEL_PATH --input INPUT --output OUTPUT 

Adapt the paths accordingly: MODEL_PATH points to your TorchScript model, INPUT to the folder containing the .nii files, and OUTPUT to the folder that will hold the .nii predictions.

Note: We have published our Docker image on Docker Hub. It can also be used to reproduce the inference result.

Pulling from Docker Hub

docker pull quoccuongcs/uit_vnu

Create a container and run inference with:

docker container run --gpus "device=1" --name uit_vnu --rm -v $PWD/inputs/:/workspace/inputs/ -v $PWD/TeamName_outputs/:/workspace/outputs/ quoccuongcs/uit_vnu:latest /bin/bash -c "sh predict.sh"

Train

Prepare data

Suppose you have a dataset of .nii files; you first need to convert them to 2D images. We have provided our structured FLARE2021 dataset here.

After downloading the zip file above and unzipping it into the data folder, you should have the following layout:

data
│   train.txt
│   val.txt
│
├───separated_img
│   │   001_0000.png
│   │   ...
│
├───separated_mask
│   │   001_0000.png
│   │   ...
│
└─── ...
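The train.txt and val.txt files hold the sample IDs for each split. A minimal sketch of producing such a split with scikit-learn (which the install step above pulls in) is shown below; the ID format and the 80/20 ratio are assumptions for illustration:

```python
from sklearn.model_selection import train_test_split

# Hypothetical slice IDs mirroring the separated_img file names (e.g. 001_0000)
ids = [f"{case:03d}_{s:04d}" for case in range(1, 6) for s in range(3)]

# Reproducible 80/20 split; in practice you may want to split per patient
# so that slices of one case never appear in both sets.
train_ids, val_ids = train_test_split(ids, test_size=0.2, random_state=0)

# train.txt / val.txt would hold one sample ID per line
train_txt = "\n".join(sorted(train_ids))
val_txt = "\n".join(sorted(val_ids))
```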

Run training

You also need to modify script/config.py as follows:

  • Line 43 is the path to the images
  • Line 44 is the path to the label masks
  • Line 77 is the path to the data folder
  • Line 141 is the path where the model will be saved. The current file has been hard-coded for your convenience.

Finally, you can run the training script:

python script/train.py 

To convert trained models to TorchScript format, use the following command with the corresponding paths:

python script/create_torchscript.py --in_model INPUT_MODEL --out_model OUTPUT_MODEL

In the command above, INPUT_MODEL is the path to the .pth file and OUTPUT_MODEL is the path to the .pt file. For example: python script/create_torchscript.py --in_model ./weight/latest.pth --out_model ./weight/latest.pt
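The core of such a conversion can be sketched with torch.jit. This is not the repository's create_torchscript.py; the tiny stand-in network, input shape, and use of tracing (rather than scripting) are assumptions for illustration:

```python
import torch

class TinySeg(torch.nn.Module):
    """Stand-in for the trained segmentation network (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(1, 5, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinySeg().eval()
example = torch.randn(1, 1, 64, 64)          # dummy single-channel slice batch
traced = torch.jit.trace(model, example)     # record the graph into TorchScript
traced.save("latest.pt")                     # .pt file loadable without Python class defs
reloaded = torch.jit.load("latest.pt")
```

The resulting .pt file can be loaded with `torch.jit.load` at inference time, without needing the original model definition, which is what makes the deployed pipeline faster to ship and lighter on dependencies.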

Our Results for the FLARE 2021 Challenge