jean-zay-users/jean-zay-doc

Error when installing miniconda following the tips and tricks (v2)


The installation breaks with the following traceback:

PREFIX=/my/home/miniconda3
Unpacking payload ...
concurrent.futures.process._RemoteTraceback:                                    
'''
Traceback (most recent call last):
  File "concurrent/futures/process.py", line 387, in wait_result_broken_or_wakeup
  File "multiprocessing/connection.py", line 256, in recv
TypeError: __init__() missing 1 required positional argument: 'msg'
'''

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "entry_point.py", line 69, in <module>
  File "concurrent/futures/process.py", line 562, in _chain_from_iterable_of_lists
  File "concurrent/futures/_base.py", line 609, in result_iterator
  File "concurrent/futures/_base.py", line 446, in result
  File "concurrent/futures/_base.py", line 391, in __get_result
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
[1730228] Failed to execute script 'entry_point' due to unhandled exception!

My quota:

[username@jean-zay3: username]$ idrquota -w
WORK: 1121 / 5120 Gio (21.89%)
WORK: 493163 / 500000 inodes (98.63%)

I only have one repository in $WORK, and it is not very big. Is the inode limit the reason?

Hi. Could you please edit out your login and group name from your message?

Hi @franchesoni ,

I only have one repository in $WORK, and it is not very big. Is the inode limit the reason?

Yes, it seems linked to the number of available inodes in $WORK. Maybe you can ask the IDRIS support to increase your inode quota. (This is the first time I have heard about this problem; recent releases of miniconda have probably multiplied the number of files to install.)
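For reference, here is a quick way to see which top-level folders of $WORK are consuming the inodes. This is only a sketch, assuming GNU coreutils is available on the front-end (du --inodes requires coreutils >= 8.22); idrquota itself only reports the total.

# Count inodes (files + directories) per top-level folder of $WORK,
# heaviest consumers listed last
du --inodes --max-depth=1 "$WORK" | sort -n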

Maybe you can ask the IDRIS support to increase your inode quota.

It's likely that this request will be rejected. We usually don't allocate more than 500k inodes due to the technical limitations of parallel filesystems.

It's likely that this request will be rejected. We usually don't allocate more than 500k inodes due to the technical limitations of parallel filesystems.

I see. I just counted the number of files in a fresh installation of miniconda and found fewer than 75,000. So the limit of 500k inodes seems reasonable.
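(For anyone who wants to reproduce the count, a minimal sketch; each file or directory found consumes one inode.)

# Count every entry created by the installation
# (replace the path with the PREFIX actually used by the installer)
find /my/home/miniconda3 | wc -l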

@franchesoni: the problem isn't linked to the number of files installed by miniconda itself. You should probably look into the other folders inside your $WORK. If you have cloned git repositories, be careful of the hidden .git folders: they may contain a lot of small files inside, and you can easily reach the inode limit. See the sketch below.
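A sketch for locating inode-heavy clones (the -maxdepth value is only an assumption about how deep the repositories sit under $WORK):

# For every cloned repository under $WORK, count the entries inside its .git folder
find "$WORK" -maxdepth 3 -type d -name .git | while read -r gitdir; do
    printf '%8d  %s\n' "$(find "$gitdir" | wc -l)" "$gitdir"
done | sort -rn

Running git gc inside a repository packs the loose objects into a packfile, which usually brings the .git inode count down considerably.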