rg314/pytraction

POC testing

Opened this issue · 11 comments

rg314 commented

Andrea and Aki to start testing the script and to contact former members to check the data.

rg314 commented

If you have any issues using or installing the package, please comment on this issue @andimi and @AkiStubb :)

rg314 commented

Also, forgot to mention: if anything is even remotely annoying, slow, or unclear, let me know and I'll either fix it or try to.

Just checking that the elastic modulus is correct in the usage notebook? It says 100 Pa in certain cases; I just wanted to check that we are sure it's E and not G'.

rg314 commented

Yes, it should be E and not G'. I believe I'll need to do the conversion... I've not looked at the AFM data yet, which is in K and needs to be converted as well. This will need correcting for my samples. I didn't check this for Example 2 or Example 3. Thank you for reminding me about this!
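For reference, the standard isotropic-elasticity relation E = 2G(1 + ν) would cover the G' case; a minimal sketch (the Poisson ratio value here is an assumption that should be confirmed for the gels):

```python
# A minimal sketch of the G' -> E conversion, assuming linear isotropic elasticity;
# nu is the gel's Poisson ratio (often taken close to 0.5 for PAA gels).
def youngs_from_shear(G, nu=0.5):
    """E = 2 * G * (1 + nu) for an isotropic elastic material."""
    return 2.0 * G * (1.0 + nu)

# e.g. a rheometer G' of 100 Pa corresponds to E = 300 Pa at nu = 0.5
E = youngs_from_shear(100, nu=0.5)
```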

Is there a limit on the number of channels in a TIF stack? Also, do the beads always need to be in the 1st channel?
We could add this information to the usage notebook.

rg314 commented

There's no limit on the number of channels, but the user must specify the bead channel. It's worth testing the edge case here: we should try putting in an image of shape (1, 3, 100, 100) and see if/what error message we get.
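Something like this could reproduce the edge case (the import path and parameter values are assumptions adapted from the usage notebook, not a confirmed API):

```python
# Hypothetical edge-case test for a 3-channel stack of shape (1, 3, 100, 100).
import numpy as np
from pytraction import TractionForce  # assumed import path

img = np.random.randint(0, 255, size=(1, 3, 100, 100), dtype=np.uint8)  # (t, c, h, w) stack
ref = img[0]                                                            # matching (3, 100, 100) reference

traction_obj = TractionForce(1.0, E=100, window_size=64)  # pix_per_um=1.0 as a placeholder
log = traction_obj.process_stack(img, ref, verbose=1)     # does this fail with a clear message?
```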

Could TractionForce have a save_data() method to save the results of the analysis in a TIF file?

rg314 commented

Yeah, that's also a good idea, but it could be annoying that the results need to be cached somewhere in between. It's possible to write them to the tmp dir of the OS, but then we need an option to delete them on termination. I think the final output should be an hdf5 file, see #18. At the moment you can do a log.to_csv(), but this is not the best format to store the data in...
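For the delete-on-termination part, the standard library already handles the cleanup; a rough sketch:

```python
# A minimal standard-library sketch of the "cache in the OS tmp dir, delete on termination" idea.
import os
import tempfile

with tempfile.TemporaryDirectory(prefix="pytraction_") as tmpdir:
    cache_path = os.path.join(tmpdir, "intermediate.tif")
    # ... write intermediate results to cache_path here ...
# the directory and everything in it is removed automatically when the block exits
```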

The bonus of a TIF is that it's reasonably easy to read, depending on what we save to it.

However, different users might want to access different parameters, for example the force field or x, y, u, v. Furthermore, they might want to do post-processing or combine outputs. For that reason, I think hdf5 might be a strong option, and I could write a simple parser that recovers the target field, which we can clearly document. This also has the advantage of reading large files quickly.

Please let me know your thoughts :)
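To make the parser idea concrete, it could look something along these lines (a sketch only; the dataset names are illustrative, not a final schema):

```python
# Hypothetical sketch of the proposed hdf5 output and a parser for one field.
import h5py

def save_results(path, **fields):
    """e.g. save_results('results.h5', x=x, y=y, u=u, v=v, traction=traction)"""
    with h5py.File(path, "w") as f:
        for name, arr in fields.items():
            f.create_dataset(name, data=arr, compression="gzip")

def load_field(path, name):
    """Recover one documented field, e.g. load_field('results.h5', 'traction')."""
    with h5py.File(path, "r") as f:
        return f[name][...]
```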


All of this makes sense, and it would be good to have an efficient file format for saving. But I also think a simple way to create an image file with the results would make it easier to include them in figures, slides, etc. Maybe utils.plot() could have a "save" parameter that saves the plot to disk, or we could add a line to the tutorial explaining how to save the plots?
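For the tutorial line, something as simple as this might be enough (assuming utils.plot() draws with matplotlib; the call itself is a guess at the API):

```python
# Hypothetical sketch, assuming utils.plot() renders the results with matplotlib.
import matplotlib.pyplot as plt
from pytraction import utils  # assumed import path

utils.plot(log)  # hypothetical call, as in the usage notebook
plt.savefig("traction_map.png", dpi=300, bbox_inches="tight")  # save the current figure to disk
```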

On a separate note, I am getting an issue when trying to run the segmentation; just posting it here before opening a new issue.
Steps:
Create the traction object:
traction_obj2 = TractionForce(pix_per_um, E=E, segment=True, window_size=64)
Run process_stack:
log2 = traction_obj2.process_stack(img, ref, verbose=1)
This gives the error:
RuntimeError: Sizes of tensors must match except in dimension 1. Got 98 and 99 in dimension 3 (The offending index is 1)
Which is very weird, because these are the dimensions:
img.shape, ref.shape
((10, 2, 1088, 1590), (2, 1088, 1590))

This issue doesn't happen when using
traction_obj2 = TractionForce(pix_per_um, E=E, segment=False, window_size=64)

Any quick ideas or shall I open a new issue?

rg314 commented

The pipeline is falling back onto a CNN to calculate the ROI for the cell, so the input image is being modified in the background. Could you create a new issue, run it in debug mode in VS Code, and get a screenshot of the input image and the output segmented image?
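One guess, and only a guess: encoder–decoder segmentation networks often need the height and width to be divisible by the downsampling factor, and 1088 × 1590 doesn't divide cleanly, which would explain the 98 vs 99 mismatch at a deeper layer. If that turns out to be the cause, a rough padding workaround could look like this (function names are hypothetical):

```python
# Hypothetical workaround sketch: pad H/W to a multiple of 32 before the CNN,
# then crop the predicted mask back to the original size.
import numpy as np

def pad_to_multiple(img2d, multiple=32):
    h, w = img2d.shape
    pad_h = (-h) % multiple
    pad_w = (-w) % multiple
    return np.pad(img2d, ((0, pad_h), (0, pad_w)), mode="reflect"), (h, w)

def crop_back(mask, original_shape):
    h, w = original_shape
    return mask[:h, :w]
```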

This is another thing that will need a good error message, because I don't want to promise that the segmentation is perfect... it's more of an experimental feature.