MemoryError
cbaal83 opened this issue · 15 comments
Hi,
when I try to build the master branch using "make html", I get the following exception:
Exception occurred:
File "/usr/lib/python3.6/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/baal/.local/lib/python3.6/site-packages/sphinx/util/parallel.py", line 91, in _process
pipe.send((failed, collector.logs, ret))
File "/usr/lib/python3.6/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/usr/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
MemoryError
My machine has 16 GB of RAM, which gets completely used. Is there anything I am doing wrong?
Which CPU model do you have? Try building with a single CPU thread by replacing "-j auto" in the Makefile with "-j 1", then run "make html" again.
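For the record, the substitution itself is a one-liner. A sketch, assuming GNU sed and a Makefile that contains "-j auto" (done here on a scratch stand-in file; in the real checkout you would edit its Makefile and then run "make html"):

```shell
cd "$(mktemp -d)"                                # scratch directory
printf 'SPHINXOPTS    ?= -j auto\n' > Makefile   # stand-in for the real file
sed -i 's/-j auto/-j 1/' Makefile                # swap the parallel flag
grep -- '-j 1' Makefile                          # confirm the substitution
```

If the Makefile forwards SPHINXOPTS to sphinx-build (common in Sphinx-generated Makefiles), a one-off override like "make html SPHINXOPTS=-j1" may also work without editing anything.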
cat /proc/cpuinfo | grep "model name" gives
model name : Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
then run make html again
just started, will let you know once it's finished
That did work. But this way, it's obviously way slower.
Any suggestions?
//edit
Nvm, I figured out that once everything is built, it only rebuilds what has changed when you rerun make html.
Thank you!
@cbaal83 It's still kind of surprising you're running out of RAM with only 4 jobs running (the i5-7500 has 4 cores/4 threads). I could see this happening if you have a ton of jobs running (e.g. with the Ryzen Threadripper 3990X, which has a whopping 64 cores/128 threads).
That said, I'm not sure if we can do anything about it. Enabling parallel builds by default greatly improves usability for most people, so I'd rather not disable it.
I am just about to rebuild everything on my Windows machine, which has a bit more steam than the other. It's using an i7-6700K @ 4 GHz with 32 GB RAM.
The make.bat is still configured as per default, aka "set SPHINXOPTS=-j auto". Funny thing is, it's not making full use of the available juice... My system is sitting at ~20%, evenly split across 8 threads. On my Linux machine, the entire system was under really heavy pressure, even though it crashed after a while.
Overall, the building process feels much slower on Windows than on Linux.
Just wanted to let you know...
//edit:
Just checked to see if it makes any difference: I changed from "-j auto" to "-j 4" or "-j 8"; it makes no difference in performance. Something with the parallel build is not working as intended...
Overall, the building process feels much slower on Windows than on Linux.
This is understandable, as raw I/O performance is often lower on Windows compared to Linux. Building documentation using Sphinx is a very I/O-consuming process due to the sheer number of files that need to be read and written to.
I am definitely not I/O bound here...
Please read my edit :)
//edit: and your argument about raw performance is definitely open for discussion :)
@cbaal83 your previous comment simply means you are not CPU bound; Calinou meant you could be storage I/O bound. Just note that we currently don't have any information in this thread about what kind of storage you are using (NVMe SSD, old hard disk drive, SSD-HDD hybrid, network mount, etc.).
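On Linux, that question is quick to answer. A sketch (lsblk is Linux-only, so there is a df fallback; in lsblk output, ROTA=1 means a rotational spinning disk and 0 means an SSD/NVMe drive):

```shell
# Identify the storage backing the current directory.
lsblk -d -o NAME,ROTA,TRAN 2>/dev/null || true   # ROTA=1 -> spinning HDD
df -T . 2>/dev/null || df .                      # nfs/cifs here -> network mount
```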
I'd like to chime in here about this issue -- I'm also facing something quite similar:
# Loaded extensions:
# sphinx.ext.mathjax (2.4.1) from /usr/lib/python3.8/site-packages/sphinx/ext/mathjax.py
# sphinxcontrib.applehelp (1.0.1) from /usr/lib/python3.8/site-packages/sphinxcontrib/applehelp/__init__.py
# sphinxcontrib.devhelp (1.0.1) from /usr/lib/python3.8/site-packages/sphinxcontrib/devhelp/__init__.py
# sphinxcontrib.htmlhelp (1.0.2) from /usr/lib/python3.8/site-packages/sphinxcontrib/htmlhelp/__init__.py
# sphinxcontrib.serializinghtml (1.1.3) from /usr/lib/python3.8/site-packages/sphinxcontrib/serializinghtml/__init__.py
# sphinxcontrib.qthelp (1.0.2) from /usr/lib/python3.8/site-packages/sphinxcontrib/qthelp/__init__.py
# alabaster (0.7.12) from /usr/lib/python3.8/site-packages/alabaster/__init__.py
# gdscript (unknown version) from /home/wilson/gitrepos/godot-docs/extensions/gdscript.py
# sphinx_tabs.tabs (unknown version) from /home/wilson/gitrepos/godot-docs/extensions/sphinx_tabs/tabs.py
# sphinx.ext.imgmath (2.4.1) from /usr/lib/python3.8/site-packages/sphinx/ext/imgmath.py
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/sphinx/cmd/build.py", line 276, in build_main
app.build(args.force_all, filenames)
File "/usr/lib/python3.8/site-packages/sphinx/application.py", line 349, in build
self.builder.build_update()
File "/usr/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 297, in build_update
self.build(to_build,
File "/usr/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 361, in build
self.write(docnames, list(updated_docnames), method)
File "/usr/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 532, in write
self._write_parallel(sorted(docnames),
File "/usr/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 576, in _write_parallel
tasks.join()
File "/usr/lib/python3.8/site-packages/sphinx/util/parallel.py", line 107, in join
self._join_one()
File "/usr/lib/python3.8/site-packages/sphinx/util/parallel.py", line 112, in _join_one
exc, logs, result = pipe.recv()
File "/usr/lib/python3.8/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
File "/usr/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
buf = self._recv(4)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError
This is currently happening on Arch Linux, and although a memory error does not show in the log, all my swap space and memory are being consumed (about 16 GB in total).
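One workaround for the swap thrashing, not mentioned above, is to cap per-process virtual memory so an oversized worker dies with a MemoryError quickly instead of dragging the whole machine into swap. A sketch, Linux/bash-specific; the 4 GiB figure is an assumption to tune for your machine:

```shell
(
  ulimit -v 4194304                  # cap in KiB (= 4 GiB) for this subshell
  ulimit -v                          # prints the active cap
  # make html SPHINXOPTS="-j 2"      # the real build would run here
)
```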
It probably makes sense to avoid building the class reference locally. This would greatly help with build times and memory usage. Unfortunately, I don't know how to do this without causing a lot of build-time errors. (Try removing the _classes folder, then run make html and see what happens.)
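Setting the folder aside (rather than deleting it) makes the experiment easy to undo. A scratch demonstration with made-up file names; in the repo you would move the folder at the checkout root, run make html (expecting many missing-reference errors), then restore it:

```shell
work="$(mktemp -d)" && cd "$work"                  # stand-in checkout
mkdir -p classes && touch classes/class_node.rst   # fake class reference
stash="$(mktemp -d)"
mv classes "$stash/"                               # set the folder aside
test ! -e classes                                  # gone from the tree
# ... `make html` would run here ...
mv "$stash/classes" .                              # restore afterwards
test -f classes/class_node.rst                     # back where it was
```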
@Calinou I couldn't find a _classes folder. I assume you meant the classes folder at the root of the repo. I removed it, saw the build-time errors, but it built successfully.
@cbaal83 your previous comment simply means you are not CPU bound; Calinou meant you could be storage I/O bound. Just note that we currently don't have any information in this thread about what kind of storage you are using (NVMe SSD, old hard disk drive, SSD-HDD hybrid, network mount, etc.).
@Rubonnek
As I said, I was definitely not I/O bound. My HDD load was around 20% or so.
@Rubonnek Yes, I meant the classes folder. I guess we can mention this possibility in the README, but also mention that it will cause a lot of build-time errors due to missing references. If you do this, make sure not to use git add . as it would stage the removal of all class files in the Git commit :)
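An alternative to being careful with git add is to restore the tracked folder before staging anything, so the deletion never shows up. A sketch in a throwaway repo with made-up file names; in godot-docs you would just run the git checkout line at the repo root:

```shell
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com && git config user.name you
mkdir classes && touch classes/class_node.rst
git add . && git commit -qm "baseline"
rm -r classes                        # the local build speed-up deletion
git checkout -- classes              # restore the tracked files
test -f classes/class_node.rst       # nothing left to stage as deleted
```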