TuragaLab/DECODE

Loading the TIFF files

AnshulToshniwal opened this issue · 4 comments

Hello,
Before the model can generate the emitter set, the TIFF image stacks must be loaded into the Python environment. The TIFF image stacks I am dealing with amount to about 50 GB in total, and hence are unsuitable to be loaded all at once before localisation. Is there any way to generate the emitter set without loading the full TIFF stack into the Python environment?

Here is a simple loop for inferring the emitter set from multiple files:

em_list = []

# run inference file by file so that only one stack is in memory at a time
for p in frame_paths:
    frames = decode.utils.frames_io.load_tif(p)
    em_list.append(infer.forward(frames))

# concatenate the per-file results into a single EmitterSet
emitter = decode.EmitterSet.cat(em_list, step_frame_ix=100)

step_frame_ix is the number of frames in each individual file; it shifts the frame indices of every subsequent chunk so that they stay consistent across files.
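For context, a rough sketch of how frame_paths could be collected; the directory path and file pattern are placeholders, and infer is assumed to be the Infer object you already set up for fitting:

from pathlib import Path

# collect the per-file TIFF stacks in a deterministic order (pattern is a placeholder)
frame_paths = sorted(Path("path_to_your_tiff_dir").glob("*.tif"))

# if every file holds N frames, pass step_frame_ix=N to EmitterSet.cat above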

Yes, I already have a loop for this, but the bottleneck is loading the TIFF files, which takes most of the time. Is there any way to make predictions on a TIFF file without loading it entirely into the Python environment?

Dear @AnshulToshniwal,
I am guessing you are not talking about multiple TIFF files but rather a single multi-page large TIFF file?
We have prepared something for this, although it is not yet thoroughly tested.

If you want to, try out using TiffTensor:

frames = decode.utils.frames_io.TiffTensor("path_to_your_large_tiff")

# ...

This will only put the portion of the TIFF into memory that is actually needed. I haven't tried it for a while, but in principle it should work. Make sure to have the TIFF on a fast volume though (local SSD).
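A rough sketch of how this could be combined with the chunked loop above; it assumes TiffTensor supports len() and slice indexing that returns a torch tensor of frames, and that infer is your existing Infer object (I haven't re-verified this recently):

frames = decode.utils.frames_io.TiffTensor("path_to_your_large_tiff")

em_list = []
chunk_size = 1000  # frames read from disk per inference pass; tune to your memory

for i in range(0, len(frames), chunk_size):
    # only this slice of the TIFF is pulled into memory
    chunk = frames[i:i + chunk_size]
    em_list.append(infer.forward(chunk))

# chunk k starts at global frame k * chunk_size, so shift indices by chunk_size
emitter = decode.EmitterSet.cat(em_list, step_frame_ix=chunk_size)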
Let me know ;)

Thanks for your replies. I have multiple TIFF files in the directory, each containing 4247 frames and about 4.3 GB in size. I will make sure to keep the TIFFs on an SSD.