OpenPIV/openpiv-python

Refactored version is less robust

ErichZimmer opened this issue · 16 comments

Ran out of internet time, so I'll finish the issue tomorrow

Describe the bug
When testing the new refactored version, it was found to be less robust against particle image pair loss and large out-of-plane gradients. For the first PIV Challenge cases A and B, analysis of the images became much harder.
Here are the image data sets that failed testing:

  • first PIV Challenge case A (failed at vortex core)
  • first PIV Challenge case B (failed at vortex core)
  • OpenPIV Python test 4 (crashed computer three times in a row, five times in total)

Data sets that passed testing:

  • first PIV Challenge (2001) case C, case E
  • second PIV Challenge (2003) case A (slightly degraded results)
  • third PIV Challenge (2005) case B, case C
  • fourth PIV Challenge (2014) case B, case F
  • all PIVlab test images (newspaper data set had slightly degraded results)
  • all OpenPIV python test images except test 4
  • all OpenPIV MATLAB test images (test 2 had degraded results)

To Reproduce
Note: the GUI and the windef script (executed in Jupyter) produced the same results.
Settings used (a windef sketch reproducing these settings follows the list):

  • first PIV Challenge (2001) case A:
    link to image set
    preprocessing: None
    correlation: FFT, no padding
    search area: None
    windowing: 64>32>16
    overlap: 32>16>8 (50%)
    validation: "disabled" by large values
    outlier replacement iteration: 5
    outlier replacement kernel: 2

  • first PIV Challenge (2001) case B:
    link to image set
    preprocessing: None
    correlation: FFT, no padding
    search area: None
    windowing: 64>32>16
    overlap: 32>16>8 (50%)
    validation: "disabled" by large values
    outlier replacement iteration: 5
    outlier replacement kernel: 2

  • OpenPIV python test 4
    preprocessing: None
    correlation: FFT, no padding
    search area: None
    windowing: 64>32>16
    overlap: 32>16>8 (50%)
    validation: "disabled" by large values
    outlier replacement iteration: 5
    outlier replacement kernel: 2
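
For reference, a windef run along the lines of these settings might look like the sketch below. It is an illustration only: the file paths are placeholders, and the Settings attribute names follow one era of windef.py, so they may differ slightly between versions.

from openpiv import windef

settings = windef.Settings()

# image source (paths and patterns are placeholders)
settings.filepath_images = "./piv_challenge_2001_A/"
settings.frame_pattern_a = "A001_1.tif"
settings.frame_pattern_b = "A001_2.tif"
settings.save_path = "./results/"

# preprocessing: None
settings.dynamic_masking_method = "None"

# three passes: windowing 64 > 32 > 16 with 50% overlap
settings.num_iterations = 3
settings.windowsizes = (64, 32, 16)
settings.overlap = (32, 16, 8)

# validation "disabled" by very large thresholds
settings.MinMax_U_disp = (-1000, 1000)
settings.MinMax_V_disp = (-1000, 1000)
settings.std_threshold = 1e10
settings.median_threshold = 1e10

# outlier replacement: 5 iterations with a kernel size of 2
settings.replace_vectors = True
settings.filter_method = "localmean"
settings.max_filter_iteration = 5
settings.filter_kernel_size = 2

windef.piv(settings)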

Expected behavior
For the first PIV Challenge (2001) cases A and B, a partially to well-defined vortex with a clear "eye" at its center was expected. However, the refactored version did not capture this characteristic due to its less robust nature.
PIV Challenge case A: refactored vs. previous version (comparison images not preserved)
PIV Challenge case B1: refactored vs. previous version (comparison images not preserved)
PIV Challenge case B2: refactored vs. previous version (comparison images not preserved)

Hardware

  • Windows 10 Professional
  • Google (browser)
  • Intel Pentium N3710 1.6 GHz
  • 4 GB DDR3 RAM
  • 64-bit operating system

Software

  • latest OpenPIV Python version
  • Python 3.8.5
  • Miniconda (for virtual environments)

Dear @ErichZimmer - thanks a lot for the thorough testing. Please share the two folders of your tests with us via Dropbox, Google Drive, or any other file-sharing method. Please include:

  1. the settings you use
  2. the explanation of use (order of operations)
  3. the results (text files and images) that show that it's "less robust".

I understand that you use the GUI, but we'll need to create a set of notebooks (one notebook per case) that we can run on both versions, without clicks or changes, and compare the results. I do not know how to compare "less robust" effects otherwise.
We have changed many things, and I was optimistic to see your previous message about 0.001% differences :) It is okay that we move on and release new things, and we want to improve them. Just please give us a reproducible test case.
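
For example, comparing the text outputs of the same case from two runs can be as simple as the sketch below (the file names are hypothetical placeholders; the .txt files are the x, y, u, v fields that windef saves):

import numpy as np

# x, y, u, v text output of the same case from the two versions
old = np.loadtxt("results_0.22.3/field_A001.txt")
new = np.loadtxt("results_refactored/field_A001.txt")

# relative RMS deviation of the displacement components (columns 2 and 3)
diff = np.sqrt(np.nanmean((new[:, 2:4] - old[:, 2:4]) ** 2))
norm = np.sqrt(np.nanmean(old[:, 2:4] ** 2))
print(f"relative RMS deviation: {100 * diff / norm:.4f} %")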

  • note that the new version does not have an option for "padding"; it always pads by construction.
  • note that the new version no longer distinguishes between "circular" and "linear"; basically, there is no "circular" now.

We can bring those options back to life at some cost in speed (we probably need to go back to the loops), but to be convinced of the need, I shall run both tests under the same conditions to make sure that I know where the difference is.

I also cannot find a difference with the new version of the normalized correlation that @TKaeufer suggested. It's a mystery to me where things go wrong, as the tests created for the functions show no difference in the correlation peak position. It is probably something about the signal-to-noise ratio, but it takes quite some time to dig into the problem without a clear comparison. I only know that it fails the tests.

We shall set up the set of comparative runs in the new repository: https://github.com/OpenPIV/test_robustness

@ErichZimmer could you please test your set (the way you usually do it, just one case for example) with the branch from @TKaeufer: https://github.com/OpenPIV/openpiv-python/tree/normalized_correlation

I want to point out that I added a new method for the window deformation called 'symmetric'. This way, the deformation is split between both frames. I implemented it so that 'symmetric' is now the default setting; to use the old deformation, the deformation_method variable has to be explicitly set to 'second image'.
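
In a script, that would look something like this minimal sketch, assuming the windef.Settings interface (the option values 'symmetric' and 'second image' are as described above):

from openpiv import windef

settings = windef.Settings()

# new default: the window deformation is split symmetrically
# between frame A and frame B
settings.deformation_method = 'symmetric'

# old behavior: deform the second image only
# settings.deformation_method = 'second image'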

I also cannot find a difference with the new version of the normalized correlation that @TKaeufer suggested. It's a mystery to me where things go wrong, as the tests created for the functions show no difference in the correlation peak position. It is probably something about the signal-to-noise ratio, but it takes quite some time to dig into the problem without a clear comparison. I only know that it fails the tests.

I will have a deeper look at that.

We can bring those options back to life at some cost in speed (we probably need to go back to the loops), but to be convinced of the need, I shall run both tests under the same conditions to make sure that I know where the difference is.

I quickly checked whether I could get the circular cross-correlation working with the new setup and without many changes, but the results look weird. I think we should implement the circular cross-correlation again because it is significantly faster than the linear one, since the cross-correlation matrices are just a quarter of the size. I don't know about the robustness, but testing the circular cross-correlation would be my best guess.

Yes please - circular cross-correlation would be faster. We just need to make sure people understand the difference, and we should also make some clear tests so users can get a feeling for when it "works" and when it does not.
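
To make the size argument concrete: for an N x N interrogation window, FFT-based circular correlation keeps the correlation map at N x N, while linear correlation requires zero-padding to 2N x 2N, i.e. four times as many elements. A minimal NumPy sketch of the two (an illustration, not the library code):

import numpy as np

def circular_correlation(a, b):
    # FFT-based cyclic cross-correlation; output keeps the window size
    f = np.fft.rfft2(a) * np.conj(np.fft.rfft2(b))
    return np.fft.fftshift(np.fft.irfft2(f, s=a.shape))

def linear_correlation(a, b):
    # zero-pad to 2N x 2N so the wrap-around of the cyclic case is eliminated
    n, m = a.shape
    f = np.fft.rfft2(a, s=(2 * n, 2 * m)) * np.conj(np.fft.rfft2(b, s=(2 * n, 2 * m)))
    return np.fft.fftshift(np.fft.irfft2(f, s=(2 * n, 2 * m)))

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, (2, 3), axis=(0, 1))       # shift b by (2, 3) pixels
print(circular_correlation(a, b).shape)   # (32, 32)
print(linear_correlation(a, b).shape)     # (64, 64) -- 4x the elements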

The problem is confirmed, see OpenPIV/test_robustness#1 (comment).
We need a good plan to fix it. @TKaeufer we need your help here :)

It's just a good old thing to say: "More is not necessarily better"

  1. make normalize_intensity optional, or at least make it behave like the old and good windef.py (see the sketch after this list)
  2. make circular correlation an option
  3. check what else is broken now
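
On point 1, the normalization in the old windef.py is essentially a per-window mean subtraction; making it optional could look like the following sketch (a simplified illustration, not the actual library routine):

import numpy as np

def normalize_intensity(window, normalize=True):
    # Per-window mean subtraction, made optional via a flag.
    # A simplified illustration; the real routine may differ in
    # details (e.g. clipping negatives or scaling by the std).
    window = np.asarray(window, dtype=np.float64)
    if normalize:
        window = window - window.mean()
    return window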

Ok, that isn't nice. Maybe the best way would be to withdraw the latest version and go back to the previous one, where things went better. Many people might not have updated yet. What do you think?

It's just a good old thing to say: "More is not necessarily better."

I totally agree with you. We might consider taking some options away and making the use of OpenPIV a little more straightforward. However, my thesis is getting more time-consuming now; therefore, my time is more constrained.

I quickly checked whether I could get the circular cross-correlation working with the new setup and without many changes, but the results look weird. I think we should implement the circular cross-correlation again because it is significantly faster than the linear one, since the cross-correlation matrices are just a quarter of the size. I don't know about the robustness, but testing the circular cross-correlation would be my best guess.

Please send me, or push to your feature branch, the new circular implementation. I am on it. If I cannot fix the clean version in a couple of days, we can always recommend installing

pip install openpiv==0.22.3

And it just works :)

@alexlib I committed my changes to the normalized_correlation pull request. I hope that's fine with you.

Thanks. I'll keep working on it. I'm now using the robustness test to dig into the details of what is crucial for the correlations. Normalization seems to be less important; the main "robustness" comes from the "circular" vs. "linear" correlation. The high-gradient regions seem to be resolved only without zero padding.

Dear Alex and Theo,

I would recommend keeping the previous version in addition to the refactored one. The reason is that in some cases the refactored version outperformed the original windef script, with a 0.00062% deviation from exact results compared to 0.091% (tested with high-quality synthetic images and one experimental data set). However, I won't have access to a computer for a LONG time, so I won't be able to publish my results for comparison.

Thanks, @ErichZimmer for the suggestion. We'll keep both or make sure that the refactored one does provide the functionality of the previous windef.py with certain parameters.

I think we are on a good path now to improving the robustness with the circular cross-correlation. Otherwise, I would completely agree with you, Erich.

This was solved in 0.23.4.