Correct handling of MAP_FULL exception
I've set the initial map size to 4 GB. When I attempt to reopen the environment I get an exception whose message is, for some reason, null, and after a restart I see

```
Exception in thread "main" java.lang.ExceptionInInitializerError
	at rhinodog.Run.main.main(main.scala)
Caused by: org.fusesource.lmdbjni.LMDBException: MDB_INVALID: File is not an LMDB file
```

and I cannot reopen LMDB at all.
I'm blocking all the reader threads that try to access the Environment while reopening it:
```scala
this.environment = new Env()
logger.debug("openEnvironment mapSize = {}", newSize)
environment.setMapSize(newSize)
environment.setMaxDbs(numberOfDatabases)
// TODO: verify assumption - Constants.NORDAHEAD hurts single-threaded
// performance but should, in theory, help multithreaded reads
val flag = if (this.storageMode == storageModeEnum.READ_ONLY) Constants.RDONLY else 0
environment.open(newDir.getPath, flag) // the exception with the null message is thrown here
this.postingsDB = environment.openDatabase("postings")
this.metadataDB = environment.openDatabase("metadata")
this.documentsDB = environment.openDatabase("documents")
this.numberOfDeletedDB = environment.openDatabase("numberOfDeletedByBlock")
this.roaringBitmapsDB = environment.openDatabase("roaringBitmaps")
this.term2ID_DB = environment.openDatabase("term2ID_DB")
this.ID2Term_DB = environment.openDatabase("ID2Term_DB")
```
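For reference, the usual way to recover from `MDB_MAP_FULL` with LMDB is: finish (commit or abort) every transaction in the process, enlarge the map via `setMapSize` (LMDB requires that no transactions be active when the map is resized, and the size should be a multiple of the OS page size), then retry the write. Since the real `Env` needs lmdbjni's native library, here is a minimal sketch of that retry loop against a hypothetical in-memory stand-in; `FakeEnv`, `MapFullException`, and `putWithResize` are made up for illustration and are not lmdbjni API:

```java
// Stand-in for the real environment: lmdbjni's Env needs the native
// library, so this fake KV store just enforces a capacity cap.
class MapFullException extends RuntimeException {}

class FakeEnv {
    long mapSize;       // current map size in bytes
    long used = 0;      // bytes written so far
    FakeEnv(long mapSize) { this.mapSize = mapSize; }
    void put(long bytes) {
        if (used + bytes > mapSize) throw new MapFullException();
        used += bytes;
    }
    // Real LMDB: may only be called while no transactions are active.
    void setMapSize(long s) { mapSize = s; }
}

class MapFullRetry {
    static final long GIB = 1L << 30;

    // Retry pattern: on map-full, grow the map in 1 GiB steps and retry.
    // With real LMDB you must first finish all open transactions.
    static void putWithResize(FakeEnv env, long bytes) {
        while (true) {
            try { env.put(bytes); return; }
            catch (MapFullException e) {
                env.setMapSize(env.mapSize + GIB);
            }
        }
    }

    public static void main(String[] args) {
        FakeEnv env = new FakeEnv(1 * GIB);
        putWithResize(env, 3 * GIB);        // grows the map 1 GiB at a time
        System.out.println(env.mapSize / GIB); // prints 3
    }
}
```

The key design point is that the resize happens between attempts, never in the middle of a write, which mirrors LMDB's no-active-transactions rule.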
New detail: the exception at reopening is `org.fusesource.lmdbjni.LMDBException: MDB_INVALID: File is not an LMDB file`.
Sort of fixed it by changing the original MAP_SIZE and the growth step to 1 GB. My code for increasing MAP_SIZE works until the map reaches 5 GB and then fails miserably; after that I cannot even reopen the database.
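One long-shot guess about the 5 GB threshold, purely speculative since the snippet doesn't show how `newSize` is computed: if the size ever passes through a 32-bit `Int` (Scala `Int` / Java `int`) on its way to `setMapSize`, it silently wraps. 5 GiB truncates to exactly 1 GiB, smaller than the 4 GiB file already on disk, which LMDB might then reject. The arithmetic is easy to check:

```java
class MapSizeOverflow {
    public static void main(String[] args) {
        long fiveGib = 5L << 30;            // 5368709120 bytes
        int truncated = (int) fiveGib;      // keeps only the low 32 bits
        // 5 GiB mod 2^32 = 1 GiB: the env would silently get a 1 GiB map.
        System.out.println(truncated);       // prints 1073741824
        // 4 GiB truncates to 0, 3 GiB wraps to a negative value:
        System.out.println((int)(4L << 30)); // prints 0
        System.out.println((int)(3L << 30)); // prints -1073741824
    }
}
```

If that's what is happening, keeping the size in a `long`/`Long` end to end (and suffixing literals with `L`) would avoid it.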
Could be an LMDB bug; have you tried reproducing the same sequence in C?