CASED-Tensorflow

Tensorflow implementation of Curriculum Adaptive Sampling for Extreme Data Imbalance (CASED, MICCAI 2017) with multi-GPU support, using the LUNA16 dataset.

Preprocessing Tutorial

 > all_in_one.py = convert_luna_to_npy + create_patch

Usage for preprocessing

> python all_in_one.py
  • Set src_root and save_path before running.

Usage for training

> python main_train.py
  • See main_train.py for other arguments.

Usage for testing

> python main_test.py

Issue

  • The hyper-parameter settings are not listed in the paper, so I am still tuning them.
  • Use Snapshot Ensembles (M=10, init_lr=0.1),
  • or fix the learning rate at 0.01.

snapshot

import numpy as np

def Snapshot(t, T, M, alpha_zero):
    """
    Cyclic cosine-annealing learning rate (Snapshot Ensembles).
    t          = current iteration
    T          = total number of iterations
    M          = number of snapshots (cycles)
    alpha_zero = initial learning rate
    """
    # position within the current cycle, mapped through a cosine
    x = (np.pi * (t % (T // M))) / (T // M)
    x = np.cos(x) + 1

    lr = (alpha_zero / 2) * x

    return lr
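As a quick sanity check (my own example, not from the paper), the schedule restarts at alpha_zero at the start of each of the M cycles and anneals toward 0 within a cycle:

```python
import numpy as np

def snapshot_lr(t, T, M, alpha_zero):
    # cyclic cosine annealing: fraction of the current cycle, mapped through cos
    cycle_len = T // M
    x = np.cos(np.pi * (t % cycle_len) / cycle_len) + 1
    return (alpha_zero / 2) * x

T, M, alpha_zero = 100, 10, 0.1  # illustrative values (init_lr=0.1 from above)
lrs = [snapshot_lr(t, T, M, alpha_zero) for t in range(T)]
print(lrs[0])   # 0.1 -> start of the first cycle
print(lrs[10])  # 0.1 -> restart at the second cycle; lrs[9] is close to 0
```

Each snapshot of the model weights is taken at a cycle boundary, right before the learning rate jumps back up.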

Summary

Preprocessing

  • Resample
> voxel spacing = 1.25 mm
  • Hounsfield clipping
> minHU = -1000
> maxHU = 400
  • Zero centering
> Pixel Mean = 0.25
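The clipping, normalization, and zero-centering steps above can be sketched as follows (a minimal sketch; the function name is mine, only the constants come from this README; resampling to 1.25 mm is omitted):

```python
import numpy as np

MIN_HU, MAX_HU = -1000.0, 400.0  # Hounsfield clipping range from the summary above
PIXEL_MEAN = 0.25                # dataset mean used for zero centering

def normalize_hu(volume):
    """Clip HU values, rescale to [0, 1], then subtract the pixel mean."""
    volume = np.clip(volume.astype(np.float32), MIN_HU, MAX_HU)
    volume = (volume - MIN_HU) / (MAX_HU - MIN_HU)
    return volume - PIXEL_MEAN

print(normalize_hu(np.array([-2000.0])))  # air clips to -1000 HU -> -0.25
print(normalize_hu(np.array([400.0])))    # clips to maxHU        ->  0.75
```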

Data augmentation

If you want to do augmentation, see this link

  • Affine rotate
> -2 to 2 degrees
  • Scale
> 0.9 to 1.1
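A minimal sketch of the rotation augmentation listed above, assuming scipy.ndimage (the linked tutorial may use a different library):

```python
import numpy as np
from scipy.ndimage import rotate

def augment_rotate(patch):
    """Randomly rotate in-plane by -2..2 degrees.

    reshape=False keeps the patch shape; mode='nearest' fills the borders.
    """
    angle = np.random.uniform(-2.0, 2.0)
    return rotate(patch, angle, axes=(0, 1), reshape=False,
                  order=1, mode='nearest')

patch = np.random.rand(32, 32).astype(np.float32)
print(augment_rotate(patch).shape)  # (32, 32)
```

Scaling by 0.9 to 1.1 can be done similarly with scipy.ndimage.zoom followed by a center crop or pad back to the original patch shape.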

Network Architecture

(network architecture figure)

Algorithm

(CASED sampling framework figure)

import numpy as np
from heapq import nlargest

p_x = 1.0  # probability of drawing a guaranteed-nodule mini-batch

for i in range(iteration):
    p = np.random.uniform(0, 1)

    if p <= p_x:
        # sample positive (nodule) patches uniformly at random, without replacement
        g_n_index = np.random.choice(N, size=batch_size, replace=False)
        batch_patch = nodule_patch[g_n_index]
        batch_y = nodule_patch_y[g_n_index]
    else:
        # hard-example mining over all patches: take the batch_size largest losses
        predictor_dict = Predictor(all_patch)  # key = index, value = loss
        g_r_index = nlargest(batch_size, predictor_dict, key=predictor_dict.get)

        batch_patch = all_patch[g_r_index]
        batch_y = all_patch_y[g_r_index]

    # anneal toward hard-example mining: p_x reaches 1/M after `iteration` steps
    p_x *= pow(1 / M, 1 / iteration)
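The decay factor is chosen so that the probability of drawing a guaranteed nodule batch falls from 1 to 1/M over the whole run. A quick check of that claim (my own, with illustrative values for M and iteration):

```python
M, iteration = 10, 1000  # illustrative values; not specified in this README

p_x = 1.0
for _ in range(iteration):
    p_x *= pow(1 / M, 1 / iteration)

print(round(p_x, 6))  # 0.1 == 1/M after `iteration` steps
```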

Result

(detection result figure)

Author

Junho Kim / @Lunit