QI2lab/OPM

issue with orthogonal interpolation code

pr4deepr opened this issue · 12 comments

Hi @dpshepherd

We've been using the orthogonal interpolation code you referred to as an OpenCL kernel in pyclesperanto_prototype, and it's been working quite well. Recently we came across some issues and would like your help troubleshooting them: clEsperanto/pyclesperanto_prototype#278

@oleksiievetsno has a custom setup where they acquire images of a GUV (giant unilamellar vesicle). A GUV is spherical, so the final image should be roughly spherical after deskewing, but it doesn't appear spherical.

image

I figured out that the scan direction was bottom to top, so we had to flip the image before deskewing.
However, the issue still persists.

Raw data: timepoint zero link to download

I think it's a nice dataset for benchmarking the code.

I've included the code that I've been troubleshooting:

voxel_size_x_in_microns = 0.1773018
voxel_size_z_in_microns = 0.5
deskewing_angle_in_degrees = 49

from numba import njit, prange
from skimage.io import imread
import numpy as np 


@njit(parallel=True)
def opm_deskew(data,theta,distance,pixel_size):
    """
    Perform parallelized orthogonal interpolation into a uniform pixel size grid.
    
    :param data: ndarray
        image stack of uniformly spaced OPM planes
    :param theta: float 
        angle relative to coverslip
    :param distance: float 
        step between image planes along coverslip
    :param pixel_size: float 
        in-plane camera pixel size in OPM coordinates
    :return output: ndarray
        image stack of deskewed OPM planes on uniform grid
    """

    # unwrap parameters 
    [num_images,ny,nx]=data.shape     # (pixels)

    # change step size from physical units to camera space (pixels)
    pixel_step = distance/pixel_size    # (pixels)

    # calculate the number of pixels scanned during stage scan 
    scan_end = num_images * pixel_step  # (pixels)

    # calculate properties for final image
    final_ny = np.int64(np.ceil(scan_end+ny*np.cos(theta*np.pi/180))) # (pixels)
    final_nz = np.int64(np.ceil(ny*np.sin(theta*np.pi/180)))          # (pixels)
    final_nx = np.int64(nx)                                           # (pixels)

    # create final image
    output = np.zeros((final_nz, final_ny, final_nx),dtype=np.float32)  # (pixels, pixels, pixels - data is float32)

    # precalculate trig functions for scan angle
    tantheta = np.float32(np.tan(theta * np.pi/180)) # (float32)
    sintheta = np.float32(np.sin(theta * np.pi/180)) # (float32)
    costheta = np.float32(np.cos(theta * np.pi/180)) # (float32)

    # perform orthogonal interpolation

    # loop through output z planes
    # defined as parallel loop in numba
    # http://numba.pydata.org/numba-doc/latest/user/parallel.html#numba-parallel
    for z in prange(0,final_nz):
        # calculate range of output y pixels to populate
        y_range_min=np.minimum(0,np.int64(np.floor(np.float32(z)/tantheta)))
        y_range_max=np.maximum(final_ny,np.int64(np.ceil(scan_end+np.float32(z)/tantheta+1)))

        # loop through final y pixels
        # defined as parallel loop in numba
        # http://numba.pydata.org/numba-doc/latest/user/parallel.html#numba-parallel
        for y in prange(y_range_min,y_range_max):

            # find the virtual tilted plane that intersects the interpolated plane 
            virtual_plane = y - z/tantheta

            # find raw data planes that surround the virtual plane
            plane_before = np.int64(np.floor(virtual_plane/pixel_step))
            plane_after = np.int64(plane_before+1)

            # continue if raw data planes are within the data range
            if ((plane_before>=0) and (plane_after<num_images)):
                
                # find distance of a point on the interpolated plane to plane_before and plane_after
                l_before = virtual_plane - plane_before * pixel_step
                l_after = pixel_step - l_before
                
                # determine location of a point along the interpolated plane
                za = z/sintheta
                virtual_pos_before = za + l_before*costheta
                virtual_pos_after = za - l_after*costheta

                # determine nearest data points to interpolated point in raw data
                pos_before = np.int64(np.floor(virtual_pos_before))
                pos_after = np.int64(np.floor(virtual_pos_after))

                # continue if within data bounds
                if ((pos_before>=0) and (pos_after >= 0) and (pos_before<ny-1) and (pos_after<ny-1)):
                    
                    # determine points surrounding interpolated point on the virtual plane 
                    dz_before = virtual_pos_before - pos_before
                    dz_after = virtual_pos_after - pos_after

                    # compute final image plane using orthogonal interpolation
                    output[z,y,:] = (l_before * dz_after * data[plane_after,pos_after+1,:] +
                                    l_before * (1-dz_after) * data[plane_after,pos_after,:] +
                                    l_after * dz_before * data[plane_before,pos_before+1,:] +
                                    l_after * (1-dz_before) * data[plane_before,pos_before,:]) /pixel_step


    # return output
    return output


#timepoint zero
img_path = "../nazar_deskew_GUV/GUV_raw_t0.tif"
img_t0 = imread(img_path)

voxel_size_x_in_microns = 0.1773018
voxel_size_z_in_microns = 0.5
deskewing_angle_in_degrees = 49

# scan direction is bottom to top, so flip the image in y
rot_img = np.flip(img_t0,axis=1)

deskewed = opm_deskew(rot_img,
                          deskewing_angle_in_degrees,
                          voxel_size_z_in_microns,
                          voxel_size_x_in_microns
                          )

# plot XY, XZ and YZ mid-plane views of the 3D image
shp = deskewed.shape

import matplotlib.pyplot as plt
fig,axes = plt.subplots(1,3)

axes[0].imshow(deskewed[int(shp[0]/2)],cmap='gray')
axes[1].imshow(deskewed[:,int(shp[1]/2)],cmap='gray')
axes[2].imshow(deskewed[:,:,int(shp[2]/2)],cmap='gray')

axes[0].set_title("XY")
axes[1].set_title("XZ")
axes[2].set_title("YZ")
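
Since the GUV should come out spherical and the deskewed grid is isotropic (one voxel per camera pixel in every direction), a quick sanity check is to threshold the vesicle and compare its extent along each axis; the three extents should be roughly equal. This is only a rough sketch on top of the script above (the Otsu threshold and the assumption that the GUV is the brightest structure in the volume are mine):

from skimage.filters import threshold_otsu

# crude sphericity check: the GUV extent in voxels should be similar along z, y and x
mask = deskewed > threshold_otsu(deskewed)
zz, yy, xx = np.nonzero(mask)
print("GUV extent (voxels) z, y, x:",
      zz.max() - zz.min() + 1,
      yy.max() - yy.min() + 1,
      xx.max() - xx.min() + 1)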

As a temporary fix, I've recommended this code: clEsperanto/pyclesperanto_prototype#278 (comment)

Thanks

Pradeep

Hi @pr4deepr -

Sure, happy to take a look. A few initial questions:

  1. What type of microscope is this? OPM geometry or two objectives?
  2. How is the scan step calibrated for the microscope?
  3. Is the data acquired with a stage scan or mirror scan?
  4. How is the light sheet angle calibrated?

Also - how was the in-plane camera pixel size calibrated to yield voxel_size_x_in_microns = 0.1773018?

Because our approach builds an isotropic grid and interpolates, if the spacing or angles are off, the deskewed data will be as well.

The clear anisotropy in the top-down view of the orthogonal-interpolation-deskewed data is a good hint that either our code is calculating the deskewed pixel sizes incorrectly or the calibration is slightly off.
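
For example (a rough illustration only, using a hypothetical stack size since I don't know your raw data dimensions), the output grid the code builds follows directly from these parameters, so a few degrees of error in the angle already changes the apparent aspect ratio of the deskewed volume:

import numpy as np

def deskewed_shape(num_images, ny, nx, theta_deg, distance, pixel_size):
    # same grid-size formulas used in opm_deskew above
    pixel_step = distance / pixel_size
    scan_end = num_images * pixel_step
    final_ny = int(np.ceil(scan_end + ny * np.cos(np.deg2rad(theta_deg))))
    final_nz = int(np.ceil(ny * np.sin(np.deg2rad(theta_deg))))
    return final_nz, final_ny, int(nx)

# hypothetical 400-plane stack of 512 x 512 pixel images
print(deskewed_shape(400, 512, 512, 49, 0.5, 0.1773018))
print(deskewed_shape(400, 512, 512, 44, 0.5, 0.1773018))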

Thanks!

Hi @dpshepherd

Thanks a lot, @dpshepherd.
I've asked @oleksiievetsno to help answer your questions, as they have the hardware.

It's good to know which things need to be verified with the hardware geometry and calibration.

Cheers
Pradeep

Hi @dpshepherd, hi @pr4deepr,
Sorry for delayed reply. I just wanted to confirm a few things with the system manufacturer.

  1. This is an ASI SCAPE system.
  2. For a stage scan, the calibration is taken care of within the plugin, though the plugin uses the light sheet angle in this calculation, so the angle has to be entered correctly for the calibration to be accurate.
  3. In this case it is mirror scan.
  4. To get the angle of the light sheet, an Argolight slide was used. A stack was taken with the O3 piezo, a maximum intensity projection was performed, and the spacing between features was measured. With this measurement and the known feature spacing, determining the angle is a simple trigonometric relation.

The pixel size was calibrated with the grid slide.

Best,
Nazar

Hi @oleksiievetsno -

Thanks. A few follow-up questions:

  • How do you calibrate the mirror scan spacing? There should be an applied-voltage-to-micron conversion factor that is determined and then double-checked using a calibration sample.
  • How did you determine the in-plane pixel size with 7 digits of precision using a grid?
  • Do you have documentation on the rationale behind the light sheet angle calibration you are using? The angle of the light sheet can be off relative to the O3 angle. One can use wide-field illumination and the strategy you have described to get the O3 tilt angle - but that does not guarantee the light sheet is well aligned and co-planar with the O3 detection plane.

Our code is regularly and successfully used with a number of home-built OPMs and SCAPEs, so I'm trying to figure out what the failure mode is here.

It would be useful to get a mirror sweep of the Argolight slide if you can share it.

Thanks,
Doug

Hi @pr4deepr and @oleksiievetsno -

Is this still of interest? I'm very happy to dig into why this occurred with your system if you can provide the calibration data.

Thanks.

Hi @dpshepherd

Thanks for all your feedback and looking into this.
Hope everything is ok @oleksiievetsno.

@dpshepherd , if we don't hear back soon, feel free to close this issue and we can always reopen it if needed.

Cheers
Pradeep

Hi @pr4deepr, Hi @dpshepherd

I am sorry for the delay with the feedback.
I will get back with replies and additional data as soon as possible.

Best,
Nazar

Hi @pr4deepr, hi @dpshepherd

Here are some of the answers:
-How do you calibrate the mirror scan spacing? There should be a voltage applied to micron conversion that is determined and then double-checked using a calibration sample.
I am attaching a few slides with some notes on the calculation and verification of the calibration parameter provided by ASI (https://cloud.mpi-cbg.de/index.php/s/KJhfzH4ihGL89j5). It depends on the galvo calibration given in the manufacturer's spec sheet, the control electronics, and the optical components. With this we find that a galvo input of 1000 units (denoted as 1 degree in the diSPIM plugin) corresponds to ~70 um of lateral translation at the sample. This was verified with the Argolight slide. This factor is then used to set the "piezo-slice calibration" parameter in the plugin, with an added geometric factor to get the slice spacing (perpendicular to the sheet).
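
To make the arithmetic explicit (a rough sketch only; the per-frame galvo step below is a made-up number for illustration):

# per the ASI notes: 1000 galvo units ("1 degree" in the diSPIM plugin) ~ 70 um lateral translation at the sample
um_per_galvo_unit = 70.0 / 1000.0

galvo_step_units = 5  # hypothetical per-frame galvo step, for illustration only
lateral_step_um = galvo_step_units * um_per_galvo_unit
print(f"lateral scan step: {lateral_step_um:.3f} um per frame")
# the plugin then applies a geometric factor involving the light sheet angle
# to convert this lateral step into the slice spacing perpendicular to the sheet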

-How did you determine the in-plane pixel size with 7 digits of precision using a grid?
Never mind; I just used the grid slide and didn't round the number.

-Do you have documentation on the rationale behind the light sheet angle calibration you are using? The angle of the light sheet can be off relative the O3 angle. One can use wide-field illumination and the strategy you have described to get the O3 tilt angle - but that does not guarantee the light sheet is well aligned and co-planar with the O3 detection plane.
You are right. As far as ensuring the light sheet is well aligned to the O3 rotation angle, I suspect the best feedback is the image quality of a bead sample. I am fairly sure we have the correct light sheet angle because when we apply only shearing the structures look fine.

Here is Argolight sweep: https://cloud.mpi-cbg.de/index.php/s/rG1P96hls3B3i32

Thank you for your help and support! And once again, sorry for the delayed reply.

Best,
Nazar

Hi @oleksiievetsno,

This is very helpful and I believe I know where the discrepancy is coming from. The parameters you are giving our code are not what the function docstring requests.

  1. Our code uses the lateral displacement along the coverslip between scan planes, not the slice spacing you defined (the two differ by a cosine factor).
  2. Our code expects the tilt angle of O3, not the complement angle. (A rough conversion sketch follows below.)
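
As a rough sketch of the conversion under my reading of your notes (reusing opm_deskew and rot_img from the script above; the exact trig factor should be double-checked against the ASI definitions, which is why the calibration data is useful):

import numpy as np

# parameters as they were passed to opm_deskew above
asi_angle_deg = 49          # assuming this is the complement of the O3 tilt angle
slice_spacing_um = 0.5      # spacing perpendicular to the light sheet
pixel_size_um = 0.1773018

# what the docstring asks for, under my reading of the geometry:
theta_deg = 90.0 - asi_angle_deg                                  # O3 tilt / light sheet angle relative to the coverslip
distance_um = slice_spacing_um / np.sin(np.deg2rad(theta_deg))    # lateral step along the coverslip
# (equivalently slice_spacing_um / cos(asi_angle_deg) - the cosine factor mentioned above)

deskewed = opm_deskew(rot_img, theta_deg, distance_um, pixel_size_um)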

If I estimate the correction for these parameters based on what you sent me, I get something much closer, see e.g.

image

I suppose these definitions are coming from ASI? It would be good to clarify the nomenclature, especially as it may be slightly different from what many people working on OPM are using.

We can think about how to modify our function to accommodate different coordinate systems, but I would prefer that we instead clarify the docstrings so that the function is used the way we wrote it.

Hi @pr4deepr and @oleksiievetsno ,

I'm going to close this given the details above. Please feel free to open a new issue to discuss:

  1. modifying the orthogonal interpolation approach to work with a different coordinate system (such as the one you were trying to use).

  2. suggestions to improve the existing docstrings to clarify the algorithm's input parameters.

Or

  1. Reopen this issue if you think there is still a problem with the algorithm.

Thanks!
Doug

Thanks heaps for the help, @dpshepherd. It's much appreciated.