jcreinhold/intensity-normalization

Memory error using `ravel-normalize`

jimcost opened this issue · 39 comments

Hello!

When attempting to run ravel-normalize with the command:

ravel-normalize -i ./T1 -m ./mask -o ./norm -c t1

(all files are MNI-registered and in NIfTI format; the dataset is a set of 10 patients), I run into this error:

ravel-normalize -i ./T1 -m ./mask -o ./norm -c t1 
2019-04-16 11:19:53,762 - intensity_normalization.exec.ravel_normalize - ERROR - 
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/exec/ravel_normalize.py", line 86, in main
    use_fcm=not args.use_atropos)
  File "/usr/local/lib/python3.6/dist-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/normalize/ravel.py", line 93, in ravel_normalize
    _, _, vh = np.linalg.svd(Vc)
  File "/home/jatlab-remote/.local/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 1612, in svd
    u, s, vh = gufunc(a, signature=signature, extobj=extobj)
MemoryError

I executed ravel-normalize on two computers, both running Ubuntu 18.04 LTS, one with 8GB and the other with 64GB of RAM. The T1s are ~7.0 MiB per NIfTI and the masks are ~120 KiB per NIfTI.

I searched Google for similar issues but without success. Any thoughts?

That is really surprising for the 64GB machine. On that machine, can you run the following commands in the Python environment that has intensity-normalization installed:

import numpy.distutils.system_info as sysinfo
print(sysinfo.platform_bits)

and report back the result?

Certainly:

Python 3.6.7 (default, Oct 22 2018, 11:32:17) 
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from intensity_normalization.normalize import fcm
>>> import numpy.distutils.system_info as sysinfo
>>> print(sysinfo.platform_bits)                                                                                                                                    
64    

The masks are just brain masks, correct? Are they particularly large?

All the T1 scans are MNI registered, so I binarized a MNI152 brain into a mask (using FSL's MNI library). Each mask is ~121 KiB.

I've definitely run this on >10 images. Are the T1-w images gadolinium enhanced?

What version of numpy are you using? Run the following:

>>> import numpy as np
>>> np.__version__

I executed ravel-normalize on five patients, with the same issue:

2019-04-16 12:10:05,688 - intensity_normalization.exec.ravel_normalize - ERROR - 
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/exec/ravel_normalize.py", line 86, in main
    use_fcm=not args.use_atropos)
  File "/usr/local/lib/python3.6/dist-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/normalize/ravel.py", line 93, in ravel_normalize
    _, _, vh = np.linalg.svd(Vc)
  File "/home/jatlab-remote/.local/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 1612, in svd
    u, s, vh = gufunc(a, signature=signature, extobj=extobj)
MemoryError

Numpy version:

Python 3.6.7 (default, Oct 22 2018, 11:32:17) 
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.__version__
'1.16.2'

Is there a way to access the code in intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/normalize/ravel.py? I have a feeling the issue may be that the data is not loading properly. By the way, the files are in ".nii.gz" format.

The source code is freely accessible in this repo. Here is the file you are looking for. I'm not sure what you mean by the data not loading properly. Is there something odd about your NIfTI images?

I'll run a small test and see if I get the same result. Can you list the names of the image and mask files? They need to have names that will align when you glob and sort them; that is, if you have the set:

T1/img1.nii.gz
T1/img2.nii.gz
...

then the masks need to be of the form:

mask/mask1.nii.gz
mask/mask2.nii.gz
...

You don't need the images in that same naming format, but there needs to be enough consistency that simple sorting aligns the image and mask files. If the masks don't align, there will be a problem when we try to isolate the CSF, which provides the control voxels.
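To make the alignment requirement concrete, here is a minimal sketch of glob-and-sort pairing. The helper name is hypothetical and this is not the package's actual loading code:

```python
# Hypothetical helper illustrating glob-and-sort pairing of images
# with masks; the package's real loader may differ.
import os
from glob import glob

def paired_files(img_dir, mask_dir, pattern="*.nii*"):
    imgs = sorted(glob(os.path.join(img_dir, pattern)))
    masks = sorted(glob(os.path.join(mask_dir, pattern)))
    if len(imgs) != len(masks):
        raise ValueError("image/mask counts differ")
    # the i-th (sorted) image is paired with the i-th (sorted) mask
    return list(zip(imgs, masks))
```

If your mask names sort into a different order than the image names, the i-th image silently gets the wrong mask.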

The images are not gadolinium enhanced, correct?

There should be nothing wrong with the NIfTI images; I have loaded, inspected, and analyzed them with at least three different packages.

The masks are identical, because the T1s are all in MNI152 space. I simply copied the mask file and renamed it to correspond with the T1 naming scheme. Here is how the files are named:
image

You are correct, these T1s are not gadolinium (or otherwise) enhanced. Does RAVEL not work on contrast-enhanced images?

The general methodology of RAVEL will work on gad-enhanced images; however, you need to isolate the CSF for the control voxels. In my implementation, I use fuzzy c-means to separate the three tissue classes: CSF, GM, and WM. Gadolinium enhancement breaks the assumptions required to isolate the CSF in this setup.
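As a rough illustration of that separation step, here is a toy, numpy-only fuzzy c-means on 1-D intensities (not the package's actual implementation, which uses scikit-fuzzy); the labels are ordered by T1-w brightness, so 1=CSF, 2=GM, 3=WM:

```python
import numpy as np

def fcm_tissue_labels(intensities, m=2.0, n_iter=50):
    """Toy 3-class fuzzy c-means on brain-voxel intensities.

    Returns labels in {1, 2, 3} ordered darkest to brightest, matching
    the T1-w convention CSF < GM < WM. Illustrative only.
    """
    x = np.asarray(intensities, dtype=float)
    centers = np.percentile(x, [10, 50, 90])      # simple initialization
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))               # membership update
        u /= u.sum(axis=0)
        um = u ** m
        centers = um @ x / um.sum(axis=1)         # center update
    order = np.argsort(centers)                   # darkest -> brightest
    hard = np.argmax(u, axis=0)
    labels = np.zeros_like(hard)
    for rank, c in enumerate(order, start=1):
        labels[hard == c] = rank
    return labels
```

Gadolinium enhancement brightens voxels in a way that violates the "CSF is the darkest class" ordering this relies on, which is why contrast-enhanced images are a problem.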

The image file names look good. Can you append -vv to your run and report the results? That is, run:

ravel-normalize -i ./T1 -m ./mask -o ./norm -c t1 -vv

Never mind; just report it with a single -v. That is, run:

ravel-normalize -i ./T1 -m ./mask -o ./norm -c t1 -v

I just ran a set of 5 images on a machine with 47.16 GB of RAM without a problem. I'm waiting for a run with 21 images to finish on the same machine. Are you using a shared machine? Are there memory-heavy processes being used in the background by other users?

I'll be curious to see the output of the command in my last comment, but I'm betting that FCM is failing or the deformable registration is failing. Could you run:

mkdir tissue_masks
python /path/to/intensity_normalization/exec/tissue_mask.py -i T1 -m mask -o tissue_masks -v

and examine the resulting tissue masks (which will be placed in the created tissue_masks directory). Verify that they make sense. If they do not, can you share a picture of the image associated with the first file processed, as shown in the log? That is, the file associated with this line [note the (1/N)]:

2019-04-16 15:32:53,298 - intensity_normalization.normalize.ravel - INFO - Applying WhiteStripe to image THIS_IMAGE_FILENAME (1/21)

Can you also provide the output from the following command:

pip list

pip3 list

antspy (0.1.4), asn1crypto (0.24.0), attrs (19.1.0), backcall (0.1.0), beautifulsoup4 (4.6.0), bleach (3.1.0), blinker (1.4), certifi (2019.3.9), chardet (3.0.4), command-not-found (0.3), cryptography (2.1.4), cupshelpers (1.0), cycler (0.10.0), Cython (0.29.6), decorator (4.4.0), defer (1.0.6), defusedxml (0.5.0), distro-info (0.18ubuntu0.18.04.1), entrypoints (0.3), feedparser (5.2.1), html5lib (0.999999999), httplib2 (0.9.2), idna (2.8), imageio (2.5.0), intensity-normalization (1.3.1), ipykernel (5.1.0), ipython (7.4.0), ipython-genutils (0.2.0), ipywidgets (7.4.2), jedi (0.13.3), Jinja2 (2.10.1), jsonschema (3.0.1), jupyter (1.0.0), jupyter-client (5.2.4), jupyter-console (6.0.0), jupyter-core (4.4.0), keyring (10.6.0), keyrings.alt (3.0), kiwisolver (1.0.1), language-selector (0.1), launchpadlib (1.10.6), lazr.restfulclient (0.13.5), lazr.uri (1.0.3), lightdm-gtk-greeter-settings (1.2.2), lxml (4.2.1), macaroonbakery (1.1.3), Mako (1.0.7), MarkupSafe (1.1.1), matplotlib (3.0.3), mistune (0.8.4), mpmath (1.1.0), nbconvert (5.4.1), nbformat (4.4.0), netifaces (0.10.4), networkx (2.3), nibabel (2.4.0), nose (1.3.7), notebook (5.7.8), numpy (1.16.2), oauth (1.0.1), oauthlib (2.0.6), pandas (0.24.2), pandocfilters (1.4.2), parso (0.4.0), patsy (0.5.1), pexpect (4.7.0), pickleshare (0.7.5), Pillow (5.1.0), pip (9.0.1), plotly (3.8.0), prometheus-client (0.6.0), prompt-toolkit (2.0.9), protobuf (3.0.0), ptyprocess (0.6.0), pycairo (1.16.2), pycrypto (2.6.1), pycups (1.9.73), Pygments (2.3.1), pygobject (3.26.1), PyJWT (1.5.3), pymacaroons (0.13.0), PyNaCl (1.1.2), pyparsing (2.4.0), pyRFC3339 (1.0), pyrsistent (0.14.11), python-apt (1.6.3+ubuntu1), python-dateutil (2.8.0), python-debian (0.1.32), pytz (2019.1), PyWavelets (1.0.3), pyxdg (0.25), PyYAML (3.12), pyzmq (18.0.1), qtconsole (4.4.3), reportlab (3.4.0), requests (2.21.0), requests-unixsocket (0.1.5), retrying (1.3.3), scikit-fuzzy (0.4.1), scikit-image (0.15.0), scikit-learn (0.20.3), scipy (1.2.1), scour 
(0.36), screen-resolution-extra (0.0.0), SecretStorage (2.3.1), Send2Trash (1.5.0), setuptools (41.0.0), simplejson (3.13.2), six (1.12.0), ssh-import-id (5.7), statsmodels (0.10.0.dev0+1072.g1ab30a7e1), sympy (1.4), system-service (0.3), terminado (0.8.2), testpath (0.4.2), tornado (6.0.2), traitlets (4.3.2), ubuntu-drivers-common (0.0.0), ufw (0.36), unity-scope-calculator (0.1), unity-scope-chromiumbookmarks (0.1), unity-scope-colourlovers (0.1), unity-scope-devhelp (0.1), unity-scope-firefoxbookmarks (0.1), unity-scope-manpages (0.1), unity-scope-openclipart (0.1), unity-scope-texdoc (0.1), unity-scope-tomboy (0.1), unity-scope-virtualbox (0.1), unity-scope-yelp (0.1), unity-scope-zotero (0.1), unity-tweak-tool (0.0.7), urllib3 (1.24.1), usb-creator (0.3.3), wadllib (1.3.2), wcwidth (0.1.7), webcolors (1.8.1), webencodings (0.5.1), wheel (0.30.0), widgetsnbextension (3.4.2), xkit (0.0.0), zope.interface (4.3.2)

Thanks. It looks good for the most part. I don't think this is an issue, but I would recommend updating antspy to v0.1.7. You can do this by downloading the repo here and running python setup.py install. It'll take an hour or so to install.

Before doing this, were you able to run the commands in my previous comments? Can you report the output of ravel-normalize with verbosity set higher, and can you examine the tissue masks to verify that they are sensible? Also, can you run:

free -h

and report this output when not running ravel-normalize; this shows the amount of memory available on the system and the amount currently being used.

Then, in python, can you run:

import struct
print(struct.calcsize("P") * 8)

and report that result.

FYI, my 21-image run on the machine with 47GB of RAM worked. If the output of the above commands makes sense, then some part of the algorithm is probably failing because of a peculiarity of your dataset (which should not happen, and which I'll be happy to fix if possible).

Thanks

Executing

ravel-normalize -i ./T1 -m ./mask -o ./norm -c t1 -v

outputs

2019-04-16 15:03:48,979 - intensity_normalization.exec.ravel_normalize - INFO - Normalizing the images according to RAVEL
2019-04-16 15:03:49,686 - intensity_normalization.normalize.ravel - INFO - Applying WhiteStripe to image TC00 (1/10)
2019-04-16 15:03:50,713 - intensity_normalization.normalize.ravel - INFO - Creating control mask for image TC00 (1/10)
2019-04-16 15:04:28,108 - intensity_normalization.normalize.ravel - INFO - Applying WhiteStripe to image TC01 (2/10)
2019-04-16 15:04:30,484 - intensity_normalization.normalize.ravel - INFO - Starting registration for image TC01 (2/10)
2019-04-16 15:05:02,378 - intensity_normalization.normalize.ravel - INFO - Creating control mask for image TC01 (2/10)
2019-04-16 15:05:38,926 - intensity_normalization.normalize.ravel - INFO - Applying WhiteStripe to image TC02 (3/10)
2019-04-16 15:05:41,320 - intensity_normalization.normalize.ravel - INFO - Starting registration for image TC02 (3/10)
2019-04-16 15:06:11,779 - intensity_normalization.normalize.ravel - INFO - Creating control mask for image TC02 (3/10)
2019-04-16 15:06:37,696 - intensity_normalization.normalize.ravel - INFO - Applying WhiteStripe to image TC03 (4/10)
2019-04-16 15:06:40,036 - intensity_normalization.normalize.ravel - INFO - Starting registration for image TC03 (4/10)
2019-04-16 15:07:11,738 - intensity_normalization.normalize.ravel - INFO - Creating control mask for image TC03 (4/10)
2019-04-16 15:07:37,963 - intensity_normalization.normalize.ravel - INFO - Applying WhiteStripe to image TC04 (5/10)
2019-04-16 15:07:40,346 - intensity_normalization.normalize.ravel - INFO - Starting registration for image TC04 (5/10)
2019-04-16 15:08:10,756 - intensity_normalization.normalize.ravel - INFO - Creating control mask for image TC04 (5/10)
2019-04-16 15:08:46,946 - intensity_normalization.normalize.ravel - INFO - Applying WhiteStripe to image TC05 (6/10)
2019-04-16 15:08:49,299 - intensity_normalization.normalize.ravel - INFO - Starting registration for image TC05 (6/10)
2019-04-16 15:09:21,523 - intensity_normalization.normalize.ravel - INFO - Creating control mask for image TC05 (6/10)
2019-04-16 15:09:45,831 - intensity_normalization.normalize.ravel - INFO - Applying WhiteStripe to image TC06 (7/10)
2019-04-16 15:09:48,203 - intensity_normalization.normalize.ravel - INFO - Starting registration for image TC06 (7/10)
2019-04-16 15:10:20,657 - intensity_normalization.normalize.ravel - INFO - Creating control mask for image TC06 (7/10)
2019-04-16 15:10:56,706 - intensity_normalization.normalize.ravel - INFO - Applying WhiteStripe to image TC07 (8/10)
2019-04-16 15:10:59,075 - intensity_normalization.normalize.ravel - INFO - Starting registration for image TC07 (8/10)
2019-04-16 15:11:31,305 - intensity_normalization.normalize.ravel - INFO - Creating control mask for image TC07 (8/10)
2019-04-16 15:12:00,175 - intensity_normalization.normalize.ravel - INFO - Applying WhiteStripe to image TC08 (9/10)
2019-04-16 15:12:02,523 - intensity_normalization.normalize.ravel - INFO - Starting registration for image TC08 (9/10)
2019-04-16 15:12:33,994 - intensity_normalization.normalize.ravel - INFO - Creating control mask for image TC08 (9/10)
2019-04-16 15:13:09,161 - intensity_normalization.normalize.ravel - INFO - Applying WhiteStripe to image TC09 (10/10)
2019-04-16 15:13:11,518 - intensity_normalization.normalize.ravel - INFO - Starting registration for image TC09 (10/10)
2019-04-16 15:13:40,527 - intensity_normalization.normalize.ravel - INFO - Creating control mask for image TC09 (10/10)
2019-04-16 15:14:16,303 - intensity_normalization.normalize.ravel - INFO - Image 1 control voxel stats -  mean: -77.401, std: 6.404
2019-04-16 15:14:16,316 - intensity_normalization.normalize.ravel - INFO - Image 2 control voxel stats -  mean: -88.948, std: 7.568
2019-04-16 15:14:16,329 - intensity_normalization.normalize.ravel - INFO - Image 3 control voxel stats -  mean: -101.304, std: 12.849
2019-04-16 15:14:16,342 - intensity_normalization.normalize.ravel - INFO - Image 4 control voxel stats -  mean: -54.620, std: 2.266
2019-04-16 15:14:16,355 - intensity_normalization.normalize.ravel - INFO - Image 5 control voxel stats -  mean: -90.076, std: 6.578
2019-04-16 15:14:16,368 - intensity_normalization.normalize.ravel - INFO - Image 6 control voxel stats -  mean: -69.534, std: 3.292
2019-04-16 15:14:16,381 - intensity_normalization.normalize.ravel - INFO - Image 7 control voxel stats -  mean: -92.001, std: 6.397
2019-04-16 15:14:16,394 - intensity_normalization.normalize.ravel - INFO - Image 8 control voxel stats -  mean: -70.430, std: 5.629
2019-04-16 15:14:16,407 - intensity_normalization.normalize.ravel - INFO - Image 9 control voxel stats -  mean: -49.207, std: 3.729
2019-04-16 15:14:16,420 - intensity_normalization.normalize.ravel - INFO - Image 10 control voxel stats -  mean: -82.254, std: 6.261
2019-04-16 15:14:16,497 - intensity_normalization.exec.ravel_normalize - ERROR - 
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/exec/ravel_normalize.py", line 86, in main
    use_fcm=not args.use_atropos)
  File "/usr/local/lib/python3.6/dist-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/normalize/ravel.py", line 93, in ravel_normalize
    _, _, vh = np.linalg.svd(Vc)
  File "/home/jatlab-remote/.local/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 1612, in svd
    u, s, vh = gufunc(a, signature=signature, extobj=extobj)
MemoryError

I am running the commands on a server, and I am the only current user. Here is its free memory:

free -h
              total        used        free      shared  buff/cache   available
Mem:            62G        3.4G         33G         43M         26G         58G
Swap:          2.0G          0B        2.0G

Lastly:

Python 3.6.7 (default, Oct 22 2018, 11:32:17) 
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import struct
>>> print(struct.calcsize("P") * 8)
64

I'll move on to your other suggestion(s) and get back to you.

Running

python /usr/local/lib/python3.6/dist-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/exec/tissue_mask.py -i ./T1 -m ./mask -o ./tissue_masks -v

outputs an error

/usr/bin/python: can't find '__main__' module in '/usr/local/lib/python3.6/dist-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/exec/tissue_mask.py'

Thoughts? I am in the process of searching for other locations of *intensity_normalization/exec* using find.

Just download the repo to your home directory (or wherever) and run the command linking to the file in that newly downloaded repo. For instance, to do all this, you could run:

cd ~
git clone https://github.com/jcreinhold/intensity-normalization.git
cd path/to/experiment
python ~/intensity-normalization/intensity_normalization/exec/tissue_mask.py -i ./T1 -m ./mask -o ./tissue_masks -v

Example output of tissue_mask.py

image

Image information:

************************************************
Image:               "./T1/TC00.nii.gz"
************************************************
  Dimensions:        182 x 218 x 182
  Voxel size:        1 x 1 x 1
  Data strides:      [ -1 2 3 ]
  Format:            NIfTI-1.1 (GZip compressed)
  Data type:         32 bit float (little endian)
  Intensity scaling: offset = 0, multiplier = 1
  Transform:                    1          -0          -0         -91
                               -0           1          -0        -126
                                0           0           1         -72

Not sure how to interpret that. There should be three classes inside the brain mask.

Can you open the tissue mask example by itself and show that? Also, can you determine if there is only one integer value inside the brain mask? The output of tissue_mask should result in an image with 4 distinct values, {0,1,2,3}, where 0 is background, 1 is CSF, 2 is GM, and 3 is WM.

For instance, if I run tissue_mask, I receive the following example output when viewed in MIPAV:

Screen Shot 2019-04-16 at 6 36 19 PM

Also, can you provide the original image without the overlay?
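If a viewer is inconvenient, the label values can also be checked programmatically. A minimal sketch (assuming nibabel is available; the file path is hypothetical):

```python
import numpy as np

def check_tissue_mask(data):
    """Return True if the volume contains exactly the labels {0, 1, 2, 3}."""
    vals = set(int(v) for v in np.unique(np.asarray(data)))
    return vals == {0, 1, 2, 3}

# e.g. (path is illustrative):
# import nibabel as nib
# print(check_tissue_mask(nib.load("tissue_masks/TC00.nii.gz").get_fdata()))
```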

Ahh apologies,

image

Hmm. This looks reasonable. Can you examine the tissue masks corresponding to all other images and verify that they look similar to this?

I wonder if this is an issue with your numpy installation and the fact that it is using the default BLAS. Is there any chance you can install miniconda (i.e., anaconda without the bloat)? This will make debugging this problem much easier; in fact, if you can set that up, then you can run:

bash /path/to/intensity_normalization/create_env.sh --antspy --1.4

After that conda environment is set up, you can try to run the ravel-normalize command again. If it fails on importing ants, then you will have to build antspy from source (within the conda environment!).

Let me know if you run into any difficulty.

(intensity_normalization) jatlab-remote@jatlab-remote:~/intensity-normalization$ ravel-normalize -i ./T1 -m ./mask -o ./norm -c t1 -v
Traceback (most recent call last):
  File "/usr/local/bin/ravel-normalize", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/home/jatlab-remote/miniconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3126, in <module>
    @_call_aside
  File "/home/jatlab-remote/miniconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3110, in _call_aside
    f(*args, **kwargs)
  File "/home/jatlab-remote/miniconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3139, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/home/jatlab-remote/miniconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 581, in _build_master
    ws.require(__requires__)
  File "/home/jatlab-remote/miniconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 898, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/home/jatlab-remote/miniconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 784, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'antspy' distribution was not found and is required by intensity-normalization

It looks like it's pointing to Python 3.7, but the Python used should be 3.6:

(intensity_normalization) jatlab-remote@jatlab-remote:~/intensity-normalization$ ll $(which python)
lrwxrwxrwx 1 jatlab-remote jatlab-remote 9 Apr 17 14:04 /home/jatlab-remote/miniconda3/envs/intensity_normalization/bin/python -> python3.6*
(intensity_normalization) jatlab-remote@jatlab-remote:~/intensity-normalization$ ll $(which python3)
lrwxrwxrwx 1 jatlab-remote jatlab-remote 9 Apr 17 14:04 /home/jatlab-remote/miniconda3/envs/intensity_normalization/bin/python3 -> python3.6*

Oops. Yeah, you need to cd into the intensity_normalization directory and then run:

bash create_env.sh --antspy --1.4

Before running the above commands, restart your terminal and run:

conda env remove -n intensity_normalization

Once all this is done, make sure that (intensity_normalization) appears to the left of the entry line in your terminal. If it does not, run source activate intensity_normalization, and then run ravel-normalize.

For future readers - the latest version of Miniconda doesn't work with Python 3.6.7; it requires Python 3.7 or greater. As a result, the create_env.sh script didn't install ANTsPy for Python 3.7.x, which is why ravel-normalize produced this error:

(intensity_normalization) jatlab-remote@jatlab-remote:~/intensity-normalization$ ravel-normalize -i ./T1 -m ./mask -o ./norm -c t1 -v
Traceback (most recent call last):
  File "/usr/local/bin/ravel-normalize", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/home/jatlab-remote/miniconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3126, in <module>
    @_call_aside
  File "/home/jatlab-remote/miniconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3110, in _call_aside
    f(*args, **kwargs)
  File "/home/jatlab-remote/miniconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3139, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/home/jatlab-remote/miniconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 581, in _build_master
    ws.require(__requires__)
  File "/home/jatlab-remote/miniconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 898, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/home/jatlab-remote/miniconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 784, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'antspy' distribution was not found and is required by intensity-normalization

I installed Miniconda3-4.5.4, downloaded from https://repo.anaconda.com/miniconda/. This installed Python 3.6.8, but you can downgrade with conda install python=3.6.7.

I executed bash create_env.sh --antspy --v1.4. Then I ran conda activate intensity_normalization, and executing ravel-normalize with my inputs caused the following error:

 File "/home/jatlab-remote/miniconda3/envs/intensity_normalization/lib/python3.6/site-packages/skimage/util/arraycrop.py", line 8, in <module>
    from numpy.lib.arraypad import _validate_lengths
ImportError: cannot import name '_validate_lengths'

This can be mended by upgrading scikit-image: pip install --upgrade scikit-image.

Now with "all functioning properly", I re-ran ravel-normalize -i ./T1 -m ./mask -o ./norm -c t1 -v. Output:

2019-04-18 14:04:15,063 - intensity_normalization.exec.ravel_normalize - ERROR - 
Traceback (most recent call last):
  File "/home/jatlab-remote/miniconda3/envs/intensity_normalization/lib/python3.6/site-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/exec/ravel_normalize.py", line 86, in main
    use_fcm=not args.use_atropos)
  File "/home/jatlab-remote/miniconda3/envs/intensity_normalization/lib/python3.6/site-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/normalize/ravel.py", line 93, in ravel_normalize
    _, _, vh = np.linalg.svd(Vc)
  File "/home/jatlab-remote/.local/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 1612, in svd
    u, s, vh = gufunc(a, signature=signature, extobj=extobj)
MemoryError

Any ideas?

I'm sorry for all the confusion regarding the installation process. I meant for this to be as easy as possible, and I forgot the note I put in the create_env file, which says to do:

. ./create_env [FLAGS]

The preceding dot is necessary for the full installation to work properly. I've been writing these responses too quickly because I'm busy. Sorry for the oversight and for wasting your time on trying to debug the installation.

Can you report back the output of

conda list -n intensity_normalization

All good. In the configuration below, I installed ANTsPy from source:

(intensity_normalization) jatlab-remote@jatlab-remote:~/intensity-normalization$ conda list -n intensity_normalization                  
# packages in environment at /home/jatlab-remote/miniconda3/envs/intensity_normalization:
#
# Name                    Version                   Build  Channel
alabaster                 0.7.12                   py36_0  
antspy                    0.1.7                    pypi_0    pypi
asn1crypto                0.24.0                   py36_0  
babel                     2.6.0                    py36_0  
blas                      1.0                         mkl  
ca-certificates           2019.3.9             hecc5488_0    conda-forge
certifi                   2019.3.9                 py36_0    conda-forge
cffi                      1.12.2           py36h2e261b9_1  
chardet                   3.0.4                    py36_1  
cloudpickle               0.8.1                      py_0  
commonmark                0.7.5                      py_0    conda-forge
coverage                  4.5.3            py36h7b6447c_0  
cryptography              2.6.1            py36h1ba5d50_0  
cycler                    0.10.0                   py36_0  
cytoolz                   0.9.0.1          py36h14c3975_1  
dask-core                 1.2.0                      py_0  
dbus                      1.13.6               h746ee38_0  
decorator                 4.4.0                    py36_1  
docutils                  0.14                     py36_0  
expat                     2.2.6                he6710b0_0  
fontconfig                2.13.0               h9420a91_0  
freetype                  2.9.1                h8a8886c_1  
future                    0.17.1                py36_1000    conda-forge
glib                      2.56.2               hd408876_0  
gst-plugins-base          1.14.0               hbbd80ab_1  
gstreamer                 1.14.0               hb453b48_1  
h5py                      2.9.0           nompi_py36hf008753_1102    conda-forge
hdf5                      1.10.4          nompi_h3c11f04_1106    conda-forge
icu                       58.2                 h9c2bf20_1  
idna                      2.8                      py36_0  
imageio                   2.5.0                    py36_0  
imagesize                 1.1.0                    py36_0  
intel-openmp              2019.3                      199  
jinja2                    2.10.1                   py36_0  
jpeg                      9b                   h024ee3a_2  
kiwisolver                1.0.1            py36hf484d3e_0  
libedit                   3.1.20181209         hc058e9b_0  
libffi                    3.2.1                hd88cf55_4  
libgcc-ng                 8.2.0                hdf63c60_1  
libgfortran-ng            7.3.0                hdf63c60_0  
libiconv                  1.15                 h63c8f33_5  
libpng                    1.6.36               hbc83047_0  
libstdcxx-ng              8.2.0                hdf63c60_1  
libtiff                   4.0.10               h2733197_2  
libuuid                   1.0.3                h1bed415_2  
libxcb                    1.13                 h1bed415_1  
libxml2                   2.9.9                he19cac6_0  
markupsafe                1.1.1            py36h7b6447c_0  
matplotlib                3.0.2            py36h5429711_0  
mkl                       2019.3                      199  
mkl_fft                   1.0.10           py36ha843d7b_0  
mkl_random                1.0.2            py36hd81dba3_0  
ncurses                   6.1                  he6710b0_1  
networkx                  2.3                        py_0  
nibabel                   2.4.0                    pypi_0    pypi
nose                      1.3.7                    py36_2  
numpy                     1.15.4           py36h7e9f1db_0  
numpy-base                1.15.4           py36hde5b4d6_0  
olefile                   0.46                     py36_0  
openssl                   1.1.1b               h14c3975_1    conda-forge
packaging                 19.0                     py36_0  
pandas                    0.23.4           py36h04863e7_0  
patsy                     0.5.1                      py_0    conda-forge
pcre                      8.43                 he6710b0_0  
pillow                    5.3.0            py36h34e0f95_0  
pip                       19.0.3                   py36_0  
pycparser                 2.19                     py36_0  
pydicom                   1.2.2                      py_0    conda-forge
pygments                  2.3.1                    py36_0  
pyopenssl                 19.0.0                   py36_0  
pyparsing                 2.4.0                      py_0  
pyqt                      5.9.2            py36h05f1152_2  
pysocks                   1.6.8                    py36_0  
python                    3.6.7                h0371630_0  
python-dateutil           2.8.0                    py36_0  
pytz                      2019.1                     py_0  
pywavelets                1.0.3            py36hdd07704_1  
qt                        5.9.7                h5867ecd_1  
readline                  7.0                  h7b6447c_5  
requests                  2.21.0                   py36_0  
scikit-image              0.15.0                   pypi_0    pypi
scikit-learn              0.20.1           py36hd81dba3_0  
scipy                     1.1.0            py36h7c811a0_2  
setuptools                41.0.0                   py36_0  
sip                       4.19.8           py36hf484d3e_0  
six                       1.12.0                   py36_0  
snowballstemmer           1.2.1                    py36_0  
sphinx                    2.0.1                      py_0  
sphinx-argparse           0.2.5                      py_0    conda-forge
sphinxcontrib-applehelp   1.0.1                      py_0  
sphinxcontrib-devhelp     1.0.1                      py_0  
sphinxcontrib-htmlhelp    1.0.2                      py_0  
sphinxcontrib-jsmath      1.0.1                      py_0  
sphinxcontrib-qthelp      1.0.2                      py_0  
sphinxcontrib-serializinghtml 1.1.1                      py_0  
sqlite                    3.27.2               h7b6447c_0  
statsmodels               0.9.0           py36h3010b51_1000    conda-forge
tk                        8.6.8                hbc83047_0  
toolz                     0.9.0                    py36_0  
tornado                   6.0.2            py36h7b6447c_0  
urllib3                   1.24.1                   py36_0  
webcolors                 1.8.1                      py_0    conda-forge
wheel                     0.33.1                   py36_0  
xz                        5.2.4                h14c3975_4  
zlib                      1.2.11               h7b6447c_3  
zstd                      1.3.7                h0b5b093_0  

Man, this is weird. This setup does not appear to have any meaningful difference from my setup. Can you run:

ravel-normalize -i ./T1 -m ./mask -o ./norm -c t1 -v -t 0.9999

and report back the results? Also, did all of the tissue masks look similar to the one you showed here? If this doesn't work I'll add a more memory-efficient SVD call for you to try.

Yes, all the tissue masks look similar/reasonable. Same error when running the above command:

(intensity_normalization) jatlab-remote@jatlab-remote:~$ ravel-normalize -i ./T1 -m ./mask -o ./norm -c t1 -v -t 0.9999
...
2019-04-19 10:12:06,571 - intensity_normalization.exec.ravel_normalize - ERROR - 
Traceback (most recent call last):
  File "/home/jatlab-remote/miniconda3/envs/intensity_normalization/lib/python3.6/site-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/exec/ravel_normalize.py", line 86, in main
    use_fcm=not args.use_atropos)
  File "/home/jatlab-remote/miniconda3/envs/intensity_normalization/lib/python3.6/site-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/normalize/ravel.py", line 93, in ravel_normalize
    _, _, vh = np.linalg.svd(Vc)
  File "/home/jatlab-remote/.local/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 1612, in svd
    u, s, vh = gufunc(a, signature=signature, extobj=extobj)
MemoryError

I added more memory efficient SVD routines. Please uninstall intensity-normalization and re-install it (you don't have to re-install the entire environment, just the package). Then try rerunning ravel-normalize as normal. If that produces the same error, then append --sparse-svd to the list of command line arguments and run again.
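For context, a truncated SVD computes only the top k singular triplets instead of the full factorization, which avoids materializing the huge factor matrices. A minimal sketch of that idea using scipy.sparse.linalg.svds (this illustrates the general technique; it is not necessarily the exact routine the --sparse-svd flag invokes):

```python
import numpy as np
from scipy.sparse.linalg import svds

# Toy stand-in for the voxel-by-subject matrix Vc (real data is far taller)
rng = np.random.default_rng(0)
Vc = rng.standard_normal((5000, 10))

# Compute only the k largest singular triplets; note svds returns the
# singular values in ascending order, unlike np.linalg.svd
k = 5
u, s, vh = svds(Vc, k=k)

print(u.shape, s.shape, vh.shape)  # (5000, 5) (5,) (5, 10)
```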

@jimcost Did the updates solve the memory problem?

Yes it worked! I did not use the --sparse-svd flag. Below is the output of RAVEL from your code (right brain) and from the R package by the authors of the method (left brain). The intensity differences are too small to see in the images, but the value ranges differ slightly; I believe this is acceptable given differences in the inputs: right brain range (-86.17, 28.25), left brain range (-84.18, 26.58).

image

For completion's sake, what did you change to remove the error?

Great news! All I did was change the call to numpy svd to include full_matrices=False.
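To see why this helps: for a tall-and-skinny matrix (many voxels, few subjects), the full SVD materializes a square U of size n_voxels x n_voxels, while the thin SVD only needs n_voxels x n_subjects. A toy demonstration (illustrative sizes only, not the package's actual code):

```python
import numpy as np

# Toy sizes: ~2000 voxels x 10 subjects (real brain masks have millions of voxels)
rng = np.random.default_rng(0)
V = rng.standard_normal((2000, 10))

# Full SVD materializes a square U: (2000, 2000) here, but
# n_voxels x n_voxels on real data -- hence the MemoryError
U_full, s_full, _ = np.linalg.svd(V, full_matrices=True)

# Thin (economy) SVD: U is only (2000, 10)
U_thin, s_thin, _ = np.linalg.svd(V, full_matrices=False)

print(U_full.shape, U_thin.shape)  # (2000, 2000) (2000, 10)
print(np.allclose(s_full, s_thin))  # True: the singular values are unchanged
```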

There are several small differences in implementation that will lead to slightly different performance, but I stepped through the R implementation and mine, checking the output at every step, and, except for one step with some numerical error, it is identical. There are some other differences in implementation due to different package support in Python and R (e.g., I use a kernel density estimate in WhiteStripe to get a smoothed histogram whereas they use a generalized additive model, because support for GAMs in Python was poor at the time I developed the app). These differences will lead to unavoidable, but small, differences in the output of the two implementations.

If you don't mind, could you run one more thing to check that the results make sense? Could you activate the intensity_normalization conda env and run:

plot_hists -i /path/to/results/directory/ -m /path/to/brain/masks/ -o python_ravel_hist.png

and post the figure here? Also, if you could run it on the original RAVEL dataset and post that picture too, it would be very helpful for verifying that the algorithm works as expected. It might also be useful to look at the histograms of the original, un-normalized images to see how the group changed.

Thanks for the feedback!

I'm closing this since it seems the problem is fixed, but please do take a look at the histograms and verify that the output makes sense and closely aligns with what was output by the original R implementation. I'd be curious either way to hear/see the result!

T1 input histograms:
python_T1_hist

RAVEL output histograms:
python_ravel_hist

Thanks! If you have a chance, does this look comparable to the output of the R implementation?

The green one seems pretty odd, but it also has a much different shape in the original compared to the other samples. The WM peak and CSF seem to be fairly well aligned though, which is what RAVEL (as implemented with the default settings) intends to provide.

Thanks for the feedback

Fortin RAVEL output histograms:

python_Rravel_hist

In terms of histogram smoothness, your implementation performs better. For WM and CSF, which peak is which?

Thanks! The WM is the largest peak (the right-most one) and the CSF is the left-most peak. If there is a peak in the middle, that corresponds to GM (in non-enhanced T1-w images)
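To illustrate that peak ordering on a T1-w-like histogram, here is a small synthetic sketch (a made-up Gaussian mixture, not real data): smooth the intensities with a KDE and read the modes left to right as CSF, GM, WM.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import gaussian_kde

# Synthetic T1-w-like intensity mixture: CSF low, GM middle, WM high and largest
rng = np.random.default_rng(1)
voxels = np.concatenate([
    rng.normal(20, 4, 1500),    # CSF-like mode
    rng.normal(60, 6, 3000),    # GM-like mode
    rng.normal(100, 6, 6000),   # WM-like mode (largest)
])

grid = np.linspace(voxels.min(), voxels.max(), 512)
density = gaussian_kde(voxels)(grid)  # smoothed histogram
peaks, _ = find_peaks(density)

# Left-most peak ~ CSF, right-most ~ WM, middle (if present) ~ GM
modes = grid[peaks]
print(np.round(modes))
```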