This repo contains the code for the Attentive Symmetric Autoencoder (ASA). It is based on nnFormer. We tested our code with CUDA 10.2 and PyTorch 1.8.1.
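Since code built on nnFormer tends to be sensitive to the exact PyTorch release, a quick version check can catch mismatches early. A minimal stdlib sketch (the helper name is ours, not part of this repo); note that `torch.__version__` may carry a local build suffix such as `1.8.1+cu102`:

```python
def version_matches(installed: str, tested: str) -> bool:
    """Compare an installed version string against the tested release.

    Strips a local build suffix such as '+cu102' before comparing,
    so '1.8.1+cu102' matches '1.8.1'.
    """
    return installed.split("+", 1)[0] == tested


# Typical use (assumes torch is importable):
#   import torch
#   assert version_matches(torch.__version__, "1.8.1")
```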
```shell
git clone
cd ASA_Pretrain
pip install -r requirements.txt
cd ../ASA_Segmentation
pip install -e .
```
Pretrain the model:

```shell
python3 -m torch.distributed.launch --nproc_per_node=2 --master_port 20003 tools/train.py --data_path DATA_PATH --output_dir OUTPUT_DIR
```
First, create folders for the raw data, the preprocessed data, and the results, then point nnFormer to them:

```shell
mkdir RAW_DATA_PATH
mkdir PREPROCESSED_DATA_PATH
mkdir RESULT_FOLDER_PATH
export nnFormer_raw_data_base=RAW_DATA_PATH
export nnFormer_preprocessed=PREPROCESSEDED_DATA_PATH
export RESULTS_FOLDER_nnFormer=RESULT_FOLDER_PATH
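If any of these variables is unset, later stages of the pipeline tend to fail with less obvious errors, so a quick check can save time. A small sketch (the function and its name are ours, not part of nnFormer):

```python
import os

# The three path variables the exports above define.
REQUIRED_VARS = (
    "nnFormer_raw_data_base",
    "nnFormer_preprocessed",
    "RESULTS_FOLDER_nnFormer",
)


def missing_nnformer_vars(env=None):
    """Return the names of required nnFormer path variables that are
    unset or empty in the given environment (defaults to os.environ)."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Calling `missing_nnformer_vars()` in the same shell session should return an empty list once the exports above are in place.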
Download the BraTS 2021 dataset from the challenge website. Then change the dataset path in `dataset_conversion/Task999_BraTS_2021.py` and run it to convert the dataset:

```shell
python dataset_conversion/Task999_BraTS_2021.py
```
After that, you can preprocess the converted data with the following command:

```shell
nnFormer_plan_and_preprocess -t 999 --verify_dataset_integrity
```
Download the ASA_PRETRAIN_MODEL and change PRETRAIN_PATH in `training/network_training/nnFormerTrainerV2_MEDIUMVIT_MAE.py`. Then fine-tune the model:

```shell
nnFormer_train --network 3d_fullres --network_trainer nnFormerTrainerV2_MEDIUMVIT --task 999 --fold 0 --tag DEFAULT
```
Finally, run inference on the test images:

```shell
nnFormer_predict -i "DATA_RAW_PATH/nnFormer_raw_data/Task999_BraTS2021/imagesTs/" -o "OUTPUT_PATH" -t 999 --tag "DEBUG" -tr nnFormerTrainerV2_MEDIUMVIT_MAE
```
Model | config | Params
---|---|---
ASA_PRETRAIN | config | google drive
ASA_SEGMENTATION | config | google drive