Aboriginer/HFS-SDE

About the dataset

Closed this issue · 1 comment

Could you provide the preprocessed fastMRI training and testing datasets used in your paper? The full dataset is very large and difficult to preprocess. Your work is truly excellent.

Hi @rookie-Tim-Chen, thanks for your interest in our work.

In the final version of the paper, we used the entire fastMRI multi-coil knee training set (973 individuals) and have provided pre-trained weights. If you wish to reproduce our training, you will need to download the whole training set. We also provide save_data_slice.py, which generates slice indices over the entire dataset to facilitate data loading (see the sketch below).
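For reference, here is a minimal sketch of what building such a slice index can look like, assuming the public fastMRI HDF5 layout (a `kspace` array of shape `(slices, coils, H, W)` per file). The function name, directory path, and output format are illustrative assumptions, not the exact contents of save_data_slice.py.

```python
# Hedged sketch: enumerate (file name, slice index) pairs for every slice
# in every multi-coil volume, so a data loader can sample slices directly.
import os
import h5py
import numpy as np

def build_slice_index(data_dir, out_path="train_slices.npy"):
    """Record (file name, slice index) pairs for all volumes in data_dir."""
    index = []
    for fname in sorted(os.listdir(data_dir)):
        if not fname.endswith(".h5"):
            continue
        with h5py.File(os.path.join(data_dir, fname), "r") as f:
            num_slices = f["kspace"].shape[0]  # (slices, coils, H, W)
        index.extend((fname, s) for s in range(num_slices))
    np.save(out_path, np.array(index, dtype=object), allow_pickle=True)
    return index

# Example usage (path is an assumption):
# build_slice_index("fastMRI/multicoil_train")
```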

Previously, we tried training on a smaller subset of the data (34 individuals selected from the validation set, T1 modality, roughly 1000 images). In our experience, this amount of data can also achieve satisfactory results, although we did not run further experiments on its generalizability.

For testing, we selected knee data from three individuals (file1000046T1.h5, file1000048T1.h5, and file1000049T1.h5) and brain data from one individual (file_brain_AXT2_200_2000019.h5), all chosen randomly from the validation set; a loading example is shown below.
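A hedged example of inspecting one of these test volumes, assuming the public fastMRI HDF5 format (complex `kspace` of shape `(slices, coils, H, W)`); this is only a sanity check, not the repo's actual test pipeline.

```python
# Load one test volume and form a zero-filled root-sum-of-squares image
# for its middle slice via an inverse 2D FFT.
import h5py
import numpy as np

with h5py.File("file1000046T1.h5", "r") as f:
    kspace = f["kspace"][()]          # (slices, coils, H, W), complex
print("k-space shape:", kspace.shape)

mid = kspace.shape[0] // 2
coil_imgs = np.fft.fftshift(
    np.fft.ifft2(np.fft.ifftshift(kspace[mid], axes=(-2, -1))),
    axes=(-2, -1),
)
rss = np.sqrt((np.abs(coil_imgs) ** 2).sum(axis=0))  # coil-combined magnitude image
print("RSS image shape:", rss.shape)
```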

Because of permission restrictions on the fastMRI dataset, we cannot upload it here. You can apply for access and download it from the official fastMRI website.