neurolabusc/MRIcro

Is there a CLI mode?

Closed this issue · 8 comments

Is there a CLI mode to create 3d renderings?

I really love how MRIcro creates good-looking renderings within seconds.

I would like to create thousands of renderings in an automated fashion without GUI interaction and embed the results on a website. PNG snapshots would be sufficient for my use case, although animated videos or even an interactively rotatable model would be best.

You will want to get MRIcroGL. It has a full scripting language: you can call it from the command line, use the savebmp() function to save images, and then call quit() to have the program terminate. Examples are provided for control from Matlab and Python, but any scripting environment that can call an external application will work. Note that the program will actually launch and run your script using the display, which provides the OpenGL context.
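Putting those pieces together, a small Python wrapper could generate the script and hand it to MRIcroGL. SAVEBMP and QUIT are the calls named above; the LOADIMAGE command, the .gls extension, and the command-line invocation style are assumptions to check against your installed version:

```python
import subprocess
import tempfile

def build_render_script(image_path, out_png):
    """Return an MRIcroGL script that loads a volume, saves a PNG
    snapshot, and exits.  LOADIMAGE is assumed; SAVEBMP and QUIT
    are the functions described above."""
    return (
        f"LOADIMAGE('{image_path}');\n"
        f"SAVEBMP('{out_png}');\n"
        "QUIT;\n"
    )

def render(image_path, out_png, mricrogl="MRIcroGL"):
    """Write the script to a temp file and launch MRIcroGL on it.
    Passing the script path as the first argument is an assumption;
    remember the program still needs a display for its OpenGL context."""
    with tempfile.NamedTemporaryFile("w", suffix=".gls", delete=False) as f:
        f.write(build_render_script(image_path, out_png))
        script_path = f.name
    subprocess.run([mricrogl, script_path], check=True)
```

Looping build_render_script() over a list of volumes is enough to produce thousands of snapshots unattended.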

You may also want to look at Surfice which provides a similar scripting language for a surface renderer (where MRIcro and MRIcroGL are volume renderers).

Cool, thank you for the response! Is there a way to run this in a docker container (ubuntu based) running on a computer without a GPU?

This will depend a lot on your setup. The code is really designed to leverage a modern GPU. It requires OpenGL 2.1 and runs nicely on Intel, AMD and NVidia GPUs. It might render in software using a recent version of Mesa, but I have not tested this. I would first see if you can get Blender to do what you want - Blender has similar GPU demands and a much larger community. Once you get Blender running, my software should run fine. Here are a few comments:
http://www.mccauslandcenter.sc.edu/mricrogl/troubleshooting

@neurolabusc Thanks a lot for your help Chris. I now managed to extract .png files from multiple perspectives.

Is there a way to reduce the resolution of the files other than CAMERADISTANCE(1.5);?

I can now combine my PNGs to generate animated videos in the style of this one:
https://mollermara.com/blog/blender-brain/blender-brain.webm

Is there a way to create such an animation directly out of MRIcroGL?

Or even better is there a way I can export the renders that they can be explored in an interactive viewer embedded in the web browser, similar to this one here?
https://neurovault.org/media/images/3714/pycortex_all/index.html

Thanks a lot for your time and effort!

There are a lot of tools that will allow you to scale your PNGs and convert them to videos; this page describes some. Unlike that author, I have had good success with ImageMagick, which can rescale your images and convert them to a movie with a single call. A quick web search will turn up several other tools, depending on your preferred operating system.
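As one possibility, the two ImageMagick calls could be driven from Python; the frame names and GIF output below are placeholders, and a tool like ffmpeg would work just as well:

```python
import glob
import subprocess

def movie_commands(frames, out="rotation.gif", scale="50%", delay=5):
    """Build the two ImageMagick calls: 'mogrify' rescales each frame
    in place, 'convert' joins the frames into an animated GIF."""
    resize = ["mogrify", "-resize", scale, *frames]
    join = ["convert", "-delay", str(delay), "-loop", "0", *frames, out]
    return resize, join

def make_movie(frame_dir="."):
    """Collect the rendered frames in order and run both commands."""
    frames = sorted(glob.glob(f"{frame_dir}/frame*.png"))
    for cmd in movie_commands(frames):
        subprocess.run(cmd, check=True)
```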

The pycortex example you link is based on a surface mesh, not volume rendering. You can convert your voxelwise data to a mesh using marching cubes or a similar algorithm. Surfice is my surface renderer. It has a scripting language very similar to MRIcroGL (my volume renderer). You can use that to generate a mesh, and then export it to your favorite tools. You may also want to look at the Brainder page that describes creating interactive PDFs.

@meowlz for videos you probably want to include the command "bmpzoom(1);" in your script. This controls the bitmap zoom (which you can also adjust from the Preferences graphical interface). A bmpzoom(2) means that the image is generated at double resolution (e.g. four times as many pixels), which is nice for scientific publications but excessive for most videos.
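The "four times as many pixels" point is just the square law of the zoom factor, since bmpzoom scales both image dimensions; a trivial check (the image dimensions here are placeholders):

```python
def pixel_count(width, height, zoom):
    """bmpzoom scales width and height, so the pixel count grows with
    the square of the zoom factor (zoom 2 -> 4x the pixels)."""
    return (width * zoom) * (height * zoom)
```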

Thank you so much for your very helpful responses. BMPZOOM(1); was indeed what I was looking for. Compared with CAMERADISTANCE(1.5) I now get PNG files of around 200 kB. The visualisation from MRIcroGL works best for my use case, and I was not happy with the results I got through surface rendering.

Now I wonder whether there is a tool with which I could showcase the MRIcroGL volume visualisations interactively in the browser, so the user can freely rotate the model, similar to what one can do for surfaces with pycortex.

You could convert the bitmaps created by MRIcroGL into an animated image. However, if you want an interactive tool that runs in the browser, you will need to find a volume renderer that supports WebGL. The version of WebGL supported by most contemporary browsers only supports 2D textures, so your first task would be to convert your 3D NIfTI image into a 2D mosaic in PNG format. Here is a small selection of WebGL viewers
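The mosaic step described above could be sketched as follows; reading the NIfTI file itself (e.g. with nibabel) and writing the PNG are omitted, and the slice orientation and tiling layout are assumptions that a given WebGL viewer may define differently:

```python
import numpy as np

def volume_to_mosaic(vol, cols=None):
    """Tile the slices of a 3D volume into one 2D mosaic image, the
    layout WebGL 1 viewers need because they only support 2D textures.
    'vol' is a (nx, ny, nz) array; slices are laid out row by row."""
    nx, ny, nz = vol.shape
    if cols is None:
        cols = int(np.ceil(np.sqrt(nz)))  # near-square grid of tiles
    rows = int(np.ceil(nz / cols))
    mosaic = np.zeros((rows * ny, cols * nx), dtype=vol.dtype)
    for k in range(nz):
        r, c = divmod(k, cols)
        mosaic[r * ny:(r + 1) * ny, c * nx:(c + 1) * nx] = vol[:, :, k].T
    return mosaic
```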