Attention Guided Network for Retinal Image Segmentation (AG-Net)

The code of "Attention Guided Network for Retinal Image Segmentation" in MICCAI 2019.

  • The code is based on Python 2.7 + PyTorch 0.4.0 (a quick version check is sketched after this list).
  • You can run <AG_Net_path>/code/test.py to test new images directly.
  • You can run <AG_Net_path>/code/main.py to train a new model.
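A minimal environment check, assuming PyTorch is importable as `torch` (the versions below are the ones the README lists; they are not enforced by the code):

```python
from __future__ import print_function  # keeps the snippet valid on Python 2.7 and 3

import sys
import torch

# AG-Net was developed against Python 2.7 and PyTorch 0.4.0;
# other versions may work but are untested here.
print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)
```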

Quick usage on your data:

  • Create a folder for your dataset at "<AG_Net_path>/data/<your_file_name>".
  • Put the images in "<AG_Net_path>/data/<your_file_name>/images".
  • Put the labels in "<AG_Net_path>/data/<your_file_name>/label".
  • Split the data into training and test sets, and store the image names in a "train_dict.pkl" file (a sample 'train_dict.pkl' is provided for the DRIVE dataset); a sketch for building this file follows the list.
  • The "train_dict.pkl" should contain two entries: 'train_list' and 'test_list'.

Train your model with:

python main.py --data_path '../data/your_file_name'

### Reference

  1. S. Zhang, H. Fu, Y. Yan, Y. Zhang, Q. Wu, M. Yang, M. Tan, Y. Xu, "Attention Guided Network for Retinal Image Segmentation," in MICCAI, 2019. [PDF]
  2. H. Fu, J. Cheng, Y. Xu, D. W. K. Wong, J. Liu, and X. Cao, “Joint Optic Disc and Cup Segmentation Based on Multi-Label Deep Network and Polar Transformation,” IEEE Trans. Med. Imaging, vol. 37, no. 7, pp. 1597–1605, 2018.