Running out of memory, catching the OSError exception
AUTplayed opened this issue · 3 comments
Hi, when I try to inspect a very large repository I'm getting these errors:
Exception in thread Thread-471:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/dist-packages/gitinspector/changes.py", line 131, in run
[self.first_hash + self.second_hash]), bufsize=1, stdout=subprocess.PIPE).stdout
File "/usr/lib/python2.7/subprocess.py", line 394, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 938, in _execute_child
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
Checking my memory usage shows 8GB/8GB used.
You probably won't be able to do anything about this, but I just wanted to let you know.
@AUTplayed Gitinspector itself doesn't really require that much memory; all it keeps is a hashmap of authors with their insertions/deletions and some other information. However, when running, it does start multiple git instances.
Gitinspector will start as many git instances as there are CPU threads in the system. As you can see from the exception, this is also where things fail.
So if you are running out of memory, you could try decreasing NUM_THREADS in blame.py and changes.py. It will slow things down significantly, but it will also keep memory usage down.
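For illustration, here is a minimal sketch of that tweak, assuming NUM_THREADS is derived from multiprocessing.cpu_count() near the top of blame.py/changes.py (the exact lines in gitinspector may differ):

import multiprocessing

# Assumed default: one worker (and thus one concurrent git process) per CPU thread.
# NUM_THREADS = multiprocessing.cpu_count()

# Cap the number of workers to trade speed for lower peak memory usage.
NUM_THREADS = min(multiprocessing.cpu_count(), 2)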
It might actually be a good idea to catch this exception and make the thread wait a bit. Maybe we could dynamically decrease NUM_THREADS when running out of memory.
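Something along these lines could work; this is not gitinspector code, just a sketch of catching OSError with errno ENOMEM around the Popen call and backing off before retrying (the helper name and the git command are placeholders):

import errno
import subprocess
import time

def popen_with_retry(cmd, retries=5, delay=2.0):
    # Run cmd, sleeping and retrying when fork() fails with "Cannot allocate memory".
    for attempt in range(retries):
        try:
            return subprocess.Popen(cmd, bufsize=1, stdout=subprocess.PIPE)
        except OSError as exc:
            if exc.errno != errno.ENOMEM or attempt == retries - 1:
                raise
            time.sleep(delay)  # give other git child processes time to finish

# Placeholder usage:
# pipe = popen_with_retry(["git", "log", "--pretty=%H"]).stdout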
Setting this for the 0.5.0 milestone.
When will the 0.5.0 version be published?
@370672701 When I have time to sit down with gitinspector. There is no estimated release date at the moment.