This is the official code for the paper *Cascaded Dilated Dense Network with Two-step Data Consistency for MRI Reconstruction*, published in NeurIPS 2019.
Here is the Link.
- Python==3.6.5
- numpy==1.14.3
- opencv-python==3.4.1.15
- scipy==1.1.0
- pytorch==1.0.1.post2
- matplotlib==2.2.2
- Prepare the data.
- Create an initialization file in `config/`, named like `CONFIGNAME.ini`. You can also make a copy of `default.ini` and edit it.
- Run `python main.py CONFIGNAME`. Note that `config/` and `.ini` will be added automatically.
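As a sketch of how the command above could resolve its argument (this is an illustrative guess, not the repository's actual `main.py`; the function name is ours):

```python
import os

# Hypothetical sketch: the bare config name given on the command line
# is expanded to config/<NAME>.ini automatically.
def resolve_config(name: str) -> str:
    return os.path.join('config', name + '.ini')

print(resolve_config('CONFIGNAME'))  # config/CONFIGNAME.ini
```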
More details can be found below.
The original data comes from the work of Alexander et al. Details can be found in the paper. You can download the original data from Here.
We converted the data into `.png` format. The PNG-format data can be found Here, and the conversion code is Here.
Note that the conversion code seems to generate results slightly different from the current PNG data; we have not identified the cause. If in doubt, simply use the converted data.
Although there are 4480 frames, we only use 3300 frames (100 frames per patient). To prepare for training, you should:
- Download the PNG-format data.
- Put the data in `./data/cardiac_ktz/`.
In our training process, we pre-generate a quantity of random sampling masks in `mask/`, named like `mask_rAMOUT_SAMPLINGRATE.mat`. These masks are applied in the constructor of the dataset.
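How such masks might be pre-generated is sketched below. This is an illustrative guess, not the repository's actual generator: the mask shape, the 1D Cartesian random sampling scheme, the function name, and the output file name are all assumptions.

```python
import os
import numpy as np
from scipy.io import savemat

def make_random_masks(amount, rate, shape=(256, 256), seed=0):
    """Generate `amount` binary masks that each keep roughly `rate`
    of the phase-encoding lines (rows), chosen uniformly at random."""
    rng = np.random.default_rng(seed)
    masks = np.zeros((amount,) + shape, dtype=np.uint8)
    n_lines = max(1, int(round(shape[0] * rate)))
    for i in range(amount):
        rows = rng.choice(shape[0], size=n_lines, replace=False)
        masks[i, rows, :] = 1
    return masks

masks = make_random_masks(amount=100, rate=0.15)
os.makedirs('mask', exist_ok=True)
savemat('mask/mask_r100_15.mat', {'mask': masks})  # illustrative file name
```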
- NetType: the network for MRI reconstruction. All options can be found in the function `getNet`. `CN_dOri_c5_complex_tr_trick2` is the proposed method.
- UseCuda: use `True` for CUDA.
- NeedParallel: use `True` to train with multiple GPU devices. We recommend choosing `True` even if only one device is available.
- Device: 1 to use a device, 0 to skip it. E.g., to use the 2nd and the 3rd GPU devices, write `0110` here. (More or fewer devices are acceptable.)
- LossType: the loss function, `mse` or `mae`. (We actually found no difference between them in this work.)
- DataType: this part is implemented by keyword detection; check the function `getDataloader` for details. `1in1_complex_random` is the default choice in the paper.
- CrossValid: only used for cross-validation. Fill in an integer in [0, 10]. Note that it is not available for the fastMRI dataset.
- Mode: abandoned. Only `inNetDC` is accepted here.
- Path: the saving path for the record and the trained weights.
- BatchSize: batch size.
- LearningRate: learning rate.
- Epoch: the number of training epochs.
- Optimizer: check `getOptimizer`.
- WeightDecay: only works if you use `Adam_wd` as the Optimizer above. Remember that `Adam_DC_DCNN` and `Adam_RDN` use a pre-defined weight decay.
- SaveEpoch: the result will be logged and saved every `SaveEpoch` epochs.
- MaxSaved: only the last `MaxSaved` weight files are kept; earlier ones are removed automatically.
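For orientation, a hypothetical config file might look like this (the section name and all values below are illustrative guesses; compare against `default.ini` in the repository for the real layout):

```ini
[DEFAULT]
NetType = CN_dOri_c5_complex_tr_trick2
UseCuda = True
NeedParallel = True
Device = 0110
LossType = mse
DataType = 1in1_complex_random
Mode = inNetDC
Path = result/example_run/
BatchSize = 8
LearningRate = 0.0001
Epoch = 1000
Optimizer = Adam
SaveEpoch = 10
MaxSaved = 5
```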
Actually, we implemented this part long ago for resuming training from a saved record, but never used it, so we do NOT promise it still works.
Use the function `loadCkpt` instead if you want to load a record.
For example:

```python
c1 = core.core('PATH_TO_RESULT/config.ini', True)  # True: do not load the training dataset.
c1.loadCkpt(1000, True)                            # True: load the checked weights.
```

Notice: the fastMRI result did not reach 1000 epochs, since the network converges within 300 epochs, so use `c1.loadCkpt(300, False)` instead if necessary.
Note that the final result is saved permanently with an additional `CHECKED_` prefix, so set `True` as the second parameter of `loadCkpt()`.
- Download the record folder from Here.
- Put the folder in `result/` if necessary.
- Use `c = core.core('FOLDER/config.ini', True)` to create a core instance with the recorded configuration.
- Use `c.loadCkpt(1000, True)` to load the trained record.
- Use `result = c.validation()` to evaluate the trained model. (Remember to prepare the dataset first.)