Badger datastore performance with English snapshot
lidel opened this issue · 9 comments
Badger shows consistent issues in go-ipfs 0.8.0 with the ~300GB unpacked wikipedia_en_all_maxi_2021-02 snapshot.
Issues
- oom-killer stopped the process at 30%, then 60%
- unable to open resulting repo with 32GB of RAM (churns until oom-killer kicks in)
- unable to open when RAM is limited to 20GB; errors after ~30s:
$ firejail --noprofile --rlimit-as=20000000000 ipfs daemon
Error: Opening table: "/media/1tb/projects/wikipedia/ipfs-repo/en/badgerds/189654.sst": Unable to map file: "189654.sst": cannot allocate memory
How to reproduce
Reproduction of relevant import steps:
- go-ipfs 0.8.0 from https://dist.ipfs.io/#go-ipfs
- zimdump from https://download.openzim.org/nightly/2021-02-12/zim-tools_linux-x86_64-2021-02-12.tar.gz
- download wikipedia_en_all_maxi_2021-02.zim (~80GB)
- unpack the ZIM (requires ~300GB of disk space, produces ~20 000 000 files)
$ zimdump dump --dir=wikipedia_en_all_maxi_2021-02 wikipedia_en_all_maxi_2021-02.zim
- Add unpacked archive to a fresh ipfs repo with badger backend:
$ ipfs init -p badgerds --empty-repo
$ ipfs config --json 'Experimental.ShardingEnabled' true
$ ipfs add -r --cid-version 1 --pin=false --offline -Qp ./wikipedia_en_all_maxi_2021-02/
It would be useful if someone reproduced the memory issue so we know it's not specific to my box.
Things to try
- tweak badger settings (a sketch of such overrides follows this list)
- add better GC to go-ipfs to avoid the oom-killer: https://github.com/raulk/go-watchdog
- port badger settings from lotus: https://github.com/filecoin-project/lotus/blob/master/node/repo/blockstore_opts.go#L21-L42
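For reference, a minimal sketch of what such overrides could look like when constructing the datastore directly with go-ds-badger v1. The path and the chosen values are assumptions for illustration, not tested recommendations:

package main

import (
	"log"

	badgeropts "github.com/dgraph-io/badger/options"
	badgerds "github.com/ipfs/go-ds-badger"
)

func main() {
	// Start from go-ds-badger's exported defaults, then override the
	// memory-sensitive knobs, similar in spirit to the lotus settings.
	opts := badgerds.DefaultOptions

	// Read SST tables and the value log from disk instead of mmap-ing
	// them, trading some throughput for a bounded resident set.
	opts.Options.TableLoadingMode = badgeropts.FileIO
	opts.Options.ValueLogLoadingMode = badgeropts.FileIO

	// Hypothetical repo path, for illustration only.
	ds, err := badgerds.NewDatastore("/path/to/repo/badgerds", &opts)
	if err != nil {
		log.Fatal(err)
	}
	defer ds.Close()
}

Note that go-ipfs builds its badger datastore from the repo's datastore spec and go-ds-badger's package-level defaults, so in practice changing these knobs means patching go-ds-badger's init(), as attempted later in this thread.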
@lidel: can/should we add a regression test for this so we don't do releases that fail for this use case?
@BigLep ideally, yes, but I'd say it is not feasible at this stage:
- badgerv1 datastore is not the default in go-ipfs (flatfs is)
- there is badgerv2
- in general, badger in go-ipfs needs some work; in the end, we may end up not switching the default to badger2/3 due to issues like this (and other issues in v2 found by the Lotus team)
- running this type of import test requires an expensive CI setup; I'd rather have us test defaults before opt-in features (and we don't have any tests like this running on CI for the flatfs datastore)
For the record, I switched to flatfs with sync set to false, and ipfs add in offline mode finished in under 5h, which is acceptable given there were no memory issues and no ridiculous CPU load. I'll be switching the README and scripts to use that instead of badger.
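For context, the sync flag controls whether flatfs fsyncs files after each write (in an ipfs repo it is the "sync" field of the flatfs mount in the Datastore.Spec config); disabling it trades crash-durability for import speed. A minimal sketch of what the flag means at the go-ds-flatfs level, with the path as an assumption:

package main

import (
	"log"

	flatfs "github.com/ipfs/go-ds-flatfs"
)

func main() {
	// next-to-last/2 is the shard function go-ipfs uses for flatfs.
	shard := flatfs.NextToLast(2)

	// sync=false skips the fsync after each write; data already on disk
	// is safe, but a crash can lose recently written blocks.
	ds, err := flatfs.CreateOrOpen("/path/to/repo/blocks", shard, false)
	if err != nil {
		log.Fatal(err)
	}
	defer ds.Close()
}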
Another badger issue: ipfs/go-ds-badger#111 (panic: slice bounds out of range)
The panic turned out to be a broken datastore (went away after re-generating).
New/old issue though:
I was unable to pin this English snapshot from #92 to a second localhost node running badger.
32GB of RAM was not enough for this; the ipfs daemon doing the pinning gets killed (same issue as I originally had during ipfs add). Will try to pin it elsewhere to see if the issue with badger can be reproduced outside my box.
I will also retry with MaxTableSize set to 64MiB (go-ipfs uses 16MiB to allocate less memory up front).
Tried finishing pinning English with go-ds-badger patched with:
--- a/datastore.go
+++ b/datastore.go
@@ -107,13 +107,13 @@ func init() {
 	DefaultOptions.Options.ValueLogLoadingMode = options.FileIO
 
 	// Explicitly set this to mmap. This doesn't use much memory anyways.
-	DefaultOptions.Options.TableLoadingMode = options.MemoryMap
+	DefaultOptions.Options.TableLoadingMode = options.FileIO
 
 	// Reduce this from 64MiB to 16MiB. That means badger will hold on to
 	// 20MiB by default instead of 80MiB.
 	//
 	// This does not appear to have a significant performance hit.
-	DefaultOptions.Options.MaxTableSize = 16 << 20
+	DefaultOptions.Options.MaxTableSize = 64 << 20
 }
Helped a bit, but crashed again after 9h (memory limited to 20GB).
I'll retry with an empty repo just to be sure it is not due to leftovers created with the old values.
This most likely could be solved by throwing enough RAM at nodes pinning the data, but pinning an existing Wikipedia snapshot should work on a consumer-grade PC.
No luck. Next step is to backport the fix for ipfs/go-ds-badger#86 and build go-ds-badger against a patched badger. (Edit: nevermind.)
Ended up using flatfs with sync set to false. Takes time, but can finish on desktop hardware without ballooning RAM usage.
Is this project using the ZIM dumps rather than Wikimedia's internal MWDumper.pl and the related XML-to-MySQL import processes? I successfully got a working database from that process, and prefer not to have another middleman in the dump and sync process... ideally we would speed up the daily dump sync and automate it... that's what Wikipedia is asking for.
We should have this thing "live and usable" and always current.
Reach me at 9546678083 if you can help or need help.
@alzinging We were doing it like this 15 years ago... wish you good luck with this approach ;) In any case, your comment is pretty off-topic in this ticket.