MattRosenLab/AUTOMAP

Scaling Factor for actual k-space raw data and things to consider while using AUTOMAP for actual k-space reconstructions


Hello,

I had some questions with regard to the methods used to generate figures 4d and 4e. The AUTOMAP paper mentions that a model trained on phase-modulated HCP data was used on actual raw k-space data. The steps followed are something like:

  • Normalize the HCP images in image space (perhaps to the range 0-1)
  • Phase-modulate the HCP images
  • Take the FFT of the images
  • Separate the real and imaginary channels and multiply them by a scaling factor $\alpha_{\text{train}}$, where $\alpha_{\text{train}} < 1$
  • Train the model on these scaled FFT values.
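
In code, my understanding of that scaling step is roughly the following (a sketch only; the function name and the default value of $\alpha_{\text{train}}$ are mine, just for illustration):

```python
import numpy as np

def hcp_to_training_kspace(image, phase, alpha_train=0.5):
    """image: 2D HCP magnitude image normalized to [0, 1];
    phase: 2D phase map in radians;
    alpha_train: scaling factor < 1 (placeholder value)."""
    complex_image = image * np.exp(1j * phase)   # phase modulation
    kspace = np.fft.fft2(complex_image)          # FFT of the image
    # separate the real/imaginary channels and scale them
    return alpha_train * np.concatenate([kspace.real.ravel(), kspace.imag.ravel()])
```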

This works if you use it to predict on HCP phase-modulated k-space data. When this model is used to predict on actual fully sampled Cartesian raw k-space data from MR scanners, I have two concerns.

  • The values of the real and imaginary parts of the actual k-space (from the scanner) span quite large ranges: something like [~-31k to 78k] for the real part and a slightly different range for the imaginary part (from a raw fully sampled k-space dataset that I have). How do you determine what scaling factor to use during prediction? Did you scale the channels independently (i.e., $\alpha_{\text{real}}$ for the real part and $\alpha_{\text{imag}}$ for the imaginary part, with $\alpha_{\text{real}} \neq \alpha_{\text{imag}}$), or did you scale them with a common $\alpha_{\text{test}}$, where $\alpha_{\text{test}} \neq \alpha_{\text{train}}$?
  • The distribution of values in the real and imaginary parts of actual k-space probably differs from the distribution of the FFT values of the HCP images used for training. Is it advisable to use AUTOMAP in such cases?

Please let me know if there is something else that we should consider. I would highly appreciate your feedback/suggestions. ✌️

Hi,

Here are the steps we typically follow for training a 128 x 128 complex-valued network:

  1. Extract HCP data (magnitude data only) in all three anatomical orientations. Each image is resized to 128 x 128. The range of values is kept the same as in the HCP database.
  2. Augment the 2D images (flip up-down, flip left-right, rotations, and translations).
  3. Add phase modulation to the magnitude data, with the phase values going from -3.1 to +3.1. This gives a complex-valued 2D image; say variable ‘full_image’.
  4. ‘full_image’ is then FFT’d into its corresponding k-space data; say variable ‘fft_space’.
  5. ‘full_image’ is flattened into [1 x 16384] and then split into a real and imaginary array: [real(full_image) imag(full_image)].
  6. ‘fft_space’ is also flattened into [1 x 16384] and then split into a real and imaginary array: [real(fft_space) imag(fft_space)].
  7. Steps 1-6 loop over 51,000 HCP images, giving matrices of size [32768 x 51000] for train_full_image and train_fft_space.
  8. train_full_image (the entire set of images) is normalized so that the values range between -0.625 and +0.625, i.e., min(train_full_image) = -0.625 and max(train_full_image) = +0.625.
  9. train_fft_space (the entire set of k-space data) is normalized so that the values range between -1.5 and +1.5.

Training is done with train_fft_space as the input to the network; the output of the network is either the real or the imaginary part of train_full_image.
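
A minimal numpy sketch of that preprocessing, with random placeholders standing in for the HCP loading, augmentation, and phase-map generation (steps 1-3), so treat it as an illustration rather than the actual training code:

```python
import numpy as np

N = 128         # image size
N_TRAIN = 51000 # as above; reduce for a quick test (full size needs ~13 GB per array)

train_full_image = np.zeros((2 * N * N, N_TRAIN))   # [32768 x 51000]
train_fft_space = np.zeros((2 * N * N, N_TRAIN))

for i in range(N_TRAIN):
    # placeholders for steps 1-2 (HCP loading + augmentation) so the sketch runs as-is
    mag = np.random.rand(N, N)
    phase = np.random.uniform(-3.1, 3.1, (N, N))  # step 3: phase values in [-3.1, +3.1]
    full_image = mag * np.exp(1j * phase)         # phase-modulated complex image
    fft_space = np.fft.fft2(full_image)           # step 4: corresponding k-space

    # steps 5-6: flatten, split into real/imaginary halves, store as one column
    train_full_image[:, i] = np.concatenate([full_image.real.ravel(),
                                             full_image.imag.ravel()])
    train_fft_space[:, i] = np.concatenate([fft_space.real.ravel(),
                                            fft_space.imag.ravel()])

# steps 8-9: dataset-level normalization (values fall within the stated ranges;
# the exact min/max need not be hit, per the note further down)
train_full_image *= 0.625 / np.abs(train_full_image).max()
train_fft_space *= 1.5 / np.abs(train_fft_space).max()
```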

Once the network is trained, here are the steps we typically follow for inference on a 128 x 128 complex-valued 2D k-space from the scanner:

  1. scanner_kspace (complex-valued) is IFFT’d into its image, scanner_image.
  2. scanner_image is reshaped to [1 x 16384], and the real and imaginary parts are concatenated: [real(scanner_image) imag(scanner_image)] = test_scanner_image.
  3. scanner_kspace is reshaped to [1 x 16384] and concatenated likewise: [real(scanner_kspace) imag(scanner_kspace)] = test_scanner_kspace.
  4. For the scaling, multiply test_scanner_kspace by a factor so that its min and max lie within ±1.5. We typically use around 0.8, but you can try fiddling with it to see the effect on the results and the noise level.
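
A sketch of that scaling step, assuming numpy; my reading of "around 0.8" as targeting roughly 0.8 of the ±1.5 training range is an assumption to be tuned empirically:

```python
import numpy as np

def scale_scanner_kspace(test_scanner_kspace, fraction=0.8, train_limit=1.5):
    """Scale scanner k-space so its values lie within the +/-1.5 training range.

    fraction is a tunable knob ("around 0.8" in the thread); interpreting it as
    the target fraction of train_limit is my assumption -- fiddle as needed.
    """
    alpha_test = fraction * train_limit / np.abs(test_scanner_kspace).max()
    return alpha_test * test_scanner_kspace
```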

To get the magnitude and phase images in the paper (4d and 4e), the real part of the output and the imaginary part of the output are put together as real + 1i*imaginary. The magnitude and phase are then obtained with the ‘abs’ and ‘angle’ functions in MATLAB.
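
The numpy equivalent of that recombination, with out_real and out_imag as illustrative placeholders for the two network output halves:

```python
import numpy as np

# placeholders for the two network output halves
out_real = np.random.rand(16384)
out_imag = np.random.rand(16384)

recon = (out_real + 1j * out_imag).reshape(128, 128)  # real + 1i*imaginary
magnitude = np.abs(recon)   # MATLAB abs
phase = np.angle(recon)     # MATLAB angle
```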

Hope this answers your questions!

Hi @danyalb,

Thanks a lot for your valuable insights 😃. Your comments made some things very clear for me. I thought the authors were using some formula to calculate the scaling factor, but it seems to be determined empirically.

Just one question: when you say that you normalize the entire train_fft_space (of size [32768 x 51000]) to the range [-1.5, 1.5], do you calculate

$\alpha_{\text{full}} = 1.5/\max(|\texttt{train\_fft\_space}|)$ and multiply train_fft_space by $\alpha_{\text{full}}$,

or is it

$\alpha_i = 1.5/\max(|\texttt{train\_fft\_space}[:, i]|)$ for every column of size [32768 x 1], repeated over all 51,000 columns, multiplying each column by its own $\alpha_i$?

In both approaches, min(train_fft_space) >= -1.5 and max(train_fft_space) <= +1.5.
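
In code, the two options I am asking about would look like this (a sketch, assuming numpy):

```python
import numpy as np

# assuming train_fft_space of shape [32768, 51000] as built above

# Option 1: one global scaling factor for the entire dataset
alpha_full = 1.5 / np.abs(train_fft_space).max()
scaled_global = alpha_full * train_fft_space

# Option 2: a separate scaling factor per image (per column)
alpha_i = 1.5 / np.abs(train_fft_space).max(axis=0, keepdims=True)  # shape [1, 51000]
scaled_per_image = alpha_i * train_fft_space
```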

Thanks!

You should scale at the dataset level, not image by image (option 1 above). And the min and max do not need to hit those values exactly, as long as the data stays within that range.

Hope that helps!

Thanks @danyalb 😄 for your time and explanations. I hope this discussion helps other people who would like to learn about AUTOMAP as well 👍. I will close this issue now.