
Gerchberg-Saxton phase retrieval method

kaanaksit opened this issue · 11 comments

This issue will track the work on migrating code from @askaradeniz that implements Gerchberg-Saxton into Odak. There are two strands to this migration. The first is migrating the code so that it works with Numpy and Cupy. The second deals with the torch implementation, which I believe @askaradeniz can initiate immediately, as his code already applies to the torch case.

  • The Numpy/Cupy case will be hosted in odak/wave/classical.py,
  • The torch case will be hosted in odak/learn/classical.py.

We will also add test cases to the test folder for both methods.
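For reference, here is a minimal sketch of the classic Gerchberg-Saxton loop in plain numpy. It is not odak's exact implementation (odak propagates with a beam-propagation kernel, while here plain fft2/ifft2 stand in for the forward and backward transforms), but it shows the structure the migrated code follows:

```python
import numpy as np

def gerchberg_saxton(target_amplitude, n_iterations=100):
    # Start from the target amplitude with a random phase guess.
    phase = np.random.rand(*target_amplitude.shape) * 2 * np.pi
    field = target_amplitude * np.exp(1j * phase)
    for _ in range(n_iterations):
        # Forward transform to the hologram plane.
        hologram = np.fft.fft2(field)
        # Keep only the phase in the hologram plane (unit amplitude).
        hologram = np.exp(1j * np.angle(hologram))
        # Back to the image plane.
        field = np.fft.ifft2(hologram)
        # Enforce the target amplitude, keep the retrieved phase.
        field = target_amplitude * np.exp(1j * np.angle(field))
    return np.angle(hologram)
```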

The Gerchberg-Saxton phase retrieval method for the Numpy/Cupy case was added with commit d2c4e3f.

A test routine can be found here. @askaradeniz, please do not start the conversion to torch until I verify this routine with a real holography setup.

This routine is now verified with a real holography setup. @askaradeniz, in case you are interested in transferring this piece of code to the learn module, the Numpy/Cupy version is ready.
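For anyone following along, here is a hedged usage sketch of the new routine. The exact argument names, defaults, and return values below are assumptions based on the module layout above, so check odak/wave/classical.py for the real signature:

```python
import numpy as np
import odak.wave

# Toy target: a bright square to retrieve a phase-only hologram for.
target = np.zeros((500, 500), dtype=np.complex64)
target[200:300, 200:300] = 1.0

wavelength = 0.5e-6  # 500 nm
dx = 8e-6            # pixel pitch in meters
distance = 0.15      # propagation distance in meters

# Hypothetical call; argument names and return values are assumptions.
hologram, reconstruction = odak.wave.gerchberg_saxton(
    target,
    n_iterations=50,
    distance=distance,
    dx=dx,
    wavelength=wavelength,
)
```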

I converted the current code of the Gerchberg-Saxton method to torch with f5e0a16.

The results may not match exactly because of the differences we noticed in #10, but they seem close to each other. These are the current reconstruction results from the numpy/cupy and torch versions with edc9872:

numpy/cupy: [image: output_amplitude]

torch: [image: output_amplitude_torch]

I suppose this concludes and closes this case.

In fact, we may be able to overcome that tiny difference in results by comparing:

  • odak.wave.set_amplitude and odak.learn.set_amplitude,
  • fftn and ifftn in torch with respect to fft2 and ifft2 in numpy and cupy (a small comparison sketch follows below).
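For the second comparison, a minimal sketch of what such a check could look like. It assumes a torch version that ships the torch.fft module (the function-style torch.fft in older releases has a different interface):

```python
import numpy as np
import torch

# Build one complex field and hand the identical values to both backends.
np.random.seed(0)
amplitude = np.random.rand(512, 512)
phase = 2 * np.pi * np.random.rand(512, 512)
field_np = amplitude * np.exp(1j * phase)
field_torch = torch.from_numpy(field_np)

# Round trip through each backend's FFT pair.
roundtrip_np = np.fft.ifft2(np.fft.fft2(field_np))
roundtrip_torch = torch.fft.ifftn(
    torch.fft.fftn(field_torch, dim=(-2, -1)), dim=(-2, -1)
)

# Maximum absolute deviation between the two backends.
diff = np.abs(roundtrip_np - roundtrip_torch.numpy()).max()
print(f'max |numpy - torch| = {diff:.3e}')
```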

I should also highlight that when @rongduo experimented, the maximum absolute difference was 10 (she uses numpy); in my case it was 15 (I use cupy). At the very least, the two comparisons above may help us understand this further. Shall we examine those two in a separate issue, @askaradeniz? Would you be willing to take the lead on that?

I suspect that the absolute difference you see is due to the randomization of the input field:
https://github.com/kunguz/odak/blob/edc987256afe6bfad16aae4031e047a717999b60/test/test_learn_beam_propagation.py#L73
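If so, seeding both generators, or creating the random field once in numpy and handing the same array to torch, should make the comparison deterministic. A sketch, assuming the test builds its field from uniform random numbers:

```python
import numpy as np
import torch

# Fix both random number generators so every run sees the same field.
np.random.seed(0)
torch.manual_seed(0)

# Safer still: build the random field once in numpy and share it, so
# both implementations start from bit-identical input.
amplitude = np.random.rand(512, 512)
phase = 2 * np.pi * np.random.rand(512, 512)
field_np = amplitude * np.exp(1j * phase)
field_torch = torch.from_numpy(field_np)  # shares memory; identical values
```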

Of course, I can take the lead on the matching issue.

Makes perfect sense. Do we get the same results without it? If so, no additional issue is needed; all we need to do is comment out that line.

But wait, I thought the torch and numpy comparison uses the same original field, no?
https://github.com/kunguz/odak/blob/edc987256afe6bfad16aae4031e047a717999b60/test/test_learn_beam_propagation.py#L79

I mean they can give a different absolute difference every time we run the test case because of the randomization, so it is normal to see different absolute differences at each run. However, the problem is that a difference of 10 or 15 is too large; it should be much smaller, as both versions use the same field.

Maybe we can just leave it as is and reopen the issue when someone needs more precise matching. @kunguz, would that be OK?

Sure, but at the moment we don't have an understanding of where the difference comes from. U1 returns the same values, for example, but right after the final fft2 the results diverge. Analysing set_amplitude should be straightforward.
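For reference, a set_amplitude-style helper usually replaces the magnitude of a complex field while preserving its phase. A generic sketch of that construction follows; odak's two implementations may differ in some detail (e.g. angle/exp versus division by the old amplitude), and that detail is exactly what the comparison should surface:

```python
import numpy as np

def set_amplitude(field, amplitude):
    # Generic sketch, not odak's exact code: swap in a new amplitude
    # while keeping the phase of the existing complex field.
    amplitude = np.abs(amplitude)  # guard against complex-valued input
    phase = np.angle(field)
    return amplitude * np.exp(1j * phase)
```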

Well, actually, I think even if we don't fix it right now, having an open issue serves as a reminder for the future.