V2: RunValueLogGC(): Value log GC attempt didn't result in any cleanup
axllent opened this issue · 16 comments
Hi @ostafen - I'm noticing every 5 minutes a RunValueLogGC(): Value log GC attempt didn't result in any cleanup
in the logs. I know where it comes from in the code, but I suspect the GC may not be working as expected. To test, I inserted 7.2GB of data yesterday (300k documents) and then deleted all the documents (so it's "empty" now). It's been > 24 hours now and I still have 7.2GB. Accessing the database is also much slower than usual (probably because it now scans through 7GB of "empty" data to eventually return nothing, so the deleted data definitely appears to impact performance).
I've inserted some more documents, deleted those, etc. (hoping some action would cause the GC to start working and prune the database), but it doesn't seem to be doing anything. All I see are those error messages on the 5-minute interval. I saw somewhere that you use a discard ratio of 0.5, however I would think that this situation should be a 100% discard ratio (or maybe 99.99%).
Are you able to shed some light on how this is supposed to work exactly, and whether there is anything I can do in Mailpit (or CloverDB) to "help" it reclaim space once a whole lot of data is deleted? I know you said "if you try to perform several insertion and deletion sequences, you will reach a point where disk space is reclaimed", however I can't seem to get that to work despite there being 300,000 fewer documents in two separate catalogues. I have considered closing the database, physically deleting the files, and then recreating it, but that seems like a very extreme solution.
Any ideas / suggestions? Thanks!
Lastly (a separate "issue"), that error output appears to be hard-coded, in that any app using CloverDB will display those errors regardless. Ideally I would prefer to hide it (or see it only in debug/verbose mode) rather than always displaying it. I know the output is coming from a goroutine - but would you consider (at some stage) a clover "flag" to turn that particular GC error message off?
Hi @axllent, thank you for reporting this. Regarding the first point, I'll do some experiments to try to figure out what the problem is and let you know. Meanwhile, have you checked whether compaction is performed when the db is closed?
Regarding the logging problem, I was planning to introduce a logging library (such as zap or logrus) and allow the logging level to be set externally. Since you need it, it's something I will introduce in the next few days.
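For illustration, something along these lines would let an application silence that GC message (a sketch only: logrus is just one of the candidates mentioned above, and this wiring is hypothetical, not the current CloverDB API):

package main

import (
	"github.com/sirupsen/logrus"
)

func main() {
	// Hypothetical wiring: the library would log through a logger whose
	// level the application controls, instead of printing unconditionally.
	logger := logrus.New()
	logger.SetLevel(logrus.ErrorLevel) // hide info/debug-level messages

	// the GC message would then only appear in debug/verbose mode
	logger.Debug("RunValueLogGC(): Value log GC attempt didn't result in any cleanup")
	logger.Error("errors would still be shown")
}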
Hey @ostafen - you're a star, thank you! Please don't stress about the error message output, I don't need that soon; I just thought I would mention it because I was talking about the GC stuff anyway :) A fix for the GC not reducing space, however, would be greatly appreciated. I don't know if it is useful, but I am using indexes, and looping through 2500 deletes at a time.
@axllent: I think I figured out the problem: it seems that badger computes stats on the files only when the db is closed.
So, as long as the server is running, even if the GC is called, it will reclaim no space, since the stats are not updated.
Try shutting down your server and restarting it to see if the space is reclaimed.
@ostafen I've tried that many times over the last week, and even with this particular database today, but it doesn't help unfortunately. Five minutes after restarting I just get those GC "error" messages.
package main

import (
	"log"
	"time"

	"github.com/ostafen/clover/v2"
)

func checkErr(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// run the GC every 5 seconds instead of the default interval
	db, err := clover.Open("./db", clover.WithGCReclaimInterval(time.Second*5))
	checkErr(err)
	defer db.Close()

	has, err := db.HasCollection("test")
	checkErr(err)

	for i := 0; i < 10; i++ {
		if !has {
			err = db.CreateCollection("test")
			checkErr(err)

			// insert 100k small documents...
			n := 100000
			for i := 0; i < n; i++ {
				doc := clover.NewDocument()
				doc.Set("timestamp", time.Now().UnixNano())
				_, err := db.InsertOne("test", doc)
				checkErr(err)
			}
		}
		// ...then delete them all by dropping the collection
		checkErr(db.DropCollection("test"))
	}
}
Try to run this script several times. During the process, in another terminal, run the following command to monitor the size of the db:
watch ls -l db
What I see is that, at each Open(), the db reaches a near-zero size.
But here you are dropping the collection, not just deleting the documents from it. Sorry, I'm in bed now, so I'll have to test tomorrow (it's after midnight :))
It is the same from the point of view of badger; we are simply deleting records (DropCollection additionally removes the collection metadata).
I'll experiment some more tomorrow then, thanks!
@ostafen I am beginning to suspect that it may have something to do with the 0.5 GCDiscardRatio that clover uses (in a few GH issues on badger, users were told to try values as low as 0.01). I don't know exactly how badger works internally, or what exactly that ratio affects in its GC logic, but I suspect it may be the key here. I don't think there is any way for an app to override that 0.5 GC value in CloverDB either (like clover.WithGCReclaimInterval() can be set ~ could this please be added?). I am beginning to think that value (0.5) means half of the "empty" space needs to be refilled before GC prunes deleted data.
In your test you are repeatedly inserting and deleting the same number of documents, and they are all very small. In my case, the document data is much bigger (physical emails), and we're not inserting and deleting the same total number each time.
I have mocked up a more representative test which adds a ±10KB (10k bytes) string to a field as well as the timestamp. Initially it inserts 100k documents (approx 1GB db in total), then deletes them, and then 10x inserts & deletes 1,000 documents. https://gist.github.com/axllent/6d7f01846a184743838992965355bf79
When I run this test, yes, I do see the size of the data sometimes reducing a little, but ultimately it leaves 750MB-1.1GB on disk (note: the test will create approximately 1.1GB of disk data). For the test I even added a 30-second pause at the end to let the GC run a few times, just to make sure. Maybe this helps?
I am beginning to think that value (0.5) means half of the "empty" space needs to be refilled before GC prunes deleted data.
The discard ratio is the percentage of space in a value log file that must be reclaimable for that file to be selected during GC. 0.5 is the default value recommended by the badger documentation. Try hardcoding a lower value inside clover's options file to see if that solves your problem (I will export a function to allow setting it as an option).
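For reference, this is roughly what the underlying call looks like against badger's public API (a sketch, not clover's actual code; the 0.01 is just the lower ratio mentioned above, and the badger version in the import is illustrative):

package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	bdb, err := badger.Open(badger.DefaultOptions("./db"))
	if err != nil {
		log.Fatal(err)
	}
	defer bdb.Close()

	// RunValueLogGC rewrites at most one value log file per call, so it is
	// normally invoked in a loop until badger reports nothing left to clean.
	discardRatio := 0.01 // lower than the 0.5 hardcoded in clover, as discussed
	for {
		if err := bdb.RunValueLogGC(discardRatio); err != nil {
			// badger.ErrNoRewrite is the "GC attempt didn't result in any
			// cleanup" case from the log message above
			break
		}
	}
}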
I have found, however, several users dealing with similar issues. See, for example:
The real problem here is that many factors affect GC reclaiming, and, as the author says, you can only expect that space will eventually be reclaimed. For example, the statistics about the percentage of stale data in each file seem to be refreshed at db open/close time, so setting the discard ratio to a lower value may still not solve your issue if your expectation is to see space reclaimed almost in "real time".
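To make that concrete, the kind of sequence I mean is the following (a sketch reusing the options from the script above; the 30-second sleep is only there to let the periodic GC run a few cycles):

package main

import (
	"log"
	"time"

	"github.com/ostafen/clover/v2"
)

func main() {
	db, err := clover.Open("./db", clover.WithGCReclaimInterval(time.Second*5))
	if err != nil {
		log.Fatal(err)
	}

	// ... insert and then delete a large number of documents here ...

	// Close and reopen: the per-file stale-data statistics appear to be
	// refreshed around open/close, so GC decisions taken after the reopen
	// are based on up-to-date stats.
	if err := db.Close(); err != nil {
		log.Fatal(err)
	}
	db, err = clover.Open("./db", clover.WithGCReclaimInterval(time.Second*5))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// give the periodic GC a few cycles before checking the directory size
	time.Sleep(30 * time.Second)
}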
You are right - setting it to 0.01 did not actually change anything in my test - it still remains at > 1GB, even when I close and reopen the empty DB. It sounds like this is completely out of your control, so there's not much (if anything) you can do here. As you say, eventually it should reclaim space - it's just not very efficient at doing so :)
I think I'll maybe implement zstd compression for the raw emails (stored in the database) to try to save some space moving forward, as this should help both in memory and on disk. The raw emails are not used for searching/filtering; they are simply parsed when an email is opened in the UI.
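For what it's worth, a minimal sketch of that idea using github.com/klauspost/compress/zstd (the "emails" collection and the EmailCompressed field name are placeholders for illustration, not Mailpit's actual schema):

package main

import (
	"log"

	"github.com/klauspost/compress/zstd"
	"github.com/ostafen/clover/v2"
)

func main() {
	enc, err := zstd.NewWriter(nil)
	if err != nil {
		log.Fatal(err)
	}
	defer enc.Close()

	db, err := clover.Open("./db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if has, _ := db.HasCollection("emails"); !has {
		if err := db.CreateCollection("emails"); err != nil {
			log.Fatal(err)
		}
	}

	rawEmail := []byte("From: sender@example.com\r\nSubject: example\r\n\r\nmessage body...")

	// Store the raw message compressed; it is only decompressed when an
	// email is opened in the UI, so the search/filter fields stay as-is.
	doc := clover.NewDocument()
	doc.Set("EmailCompressed", enc.EncodeAll(rawEmail, nil))

	if _, err := db.InsertOne("emails", doc); err != nil {
		log.Fatal(err)
	}
}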
Feel free to mark this as resolved, unless you want to leave it here if you're going to add any functionality. Thanks again @ostafen :)
The irony of it all is I just checked the database directory of Mailpit, which was still running from yesterday (and had several gigs of "empty" data), and it's now down to almost nothing.... LOL. I'll make a mental note that pruning can take > 24 hours....
You could also try reducing the interval at which the garbage collector is called (so it runs more frequently) to see if space is reclaimed sooner.
Anyway, the real problem is that badger doesn't give any guarantees about this, so we shouldn't make assumptions about disk size.
Plus the fact that I think I need to close and re-open my database after a DeleteAll(), something I would have preferred not to do. I do have some ideas, though they involve locking all DB access while closing and reopening, with each database function relying on a sync.RWMutex{}. I'm sure that will take a performance hit, but that's more testing (for me) to do!
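To sketch what I mean (very rough, only using calls that already appear in this thread; the store type and its method names are made up):

package main

import (
	"log"
	"sync"
	"time"

	"github.com/ostafen/clover/v2"
)

// store wraps the clover DB so it can be closed and reopened (to nudge
// badger into refreshing its stats) while blocking all other access.
type store struct {
	mu sync.RWMutex
	db *clover.DB
}

// reopen takes the write lock, so every other method is blocked until the
// database has been closed and opened again.
func (s *store) reopen(path string) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	if err := s.db.Close(); err != nil {
		return err
	}
	db, err := clover.Open(path, clover.WithGCReclaimInterval(time.Minute*5))
	if err != nil {
		return err
	}
	s.db = db
	return nil
}

// every other database method would take the read lock, e.g.:
func (s *store) hasCollection(name string) (bool, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.db.HasCollection(name)
}

func main() {
	db, err := clover.Open("./db", clover.WithGCReclaimInterval(time.Minute*5))
	if err != nil {
		log.Fatal(err)
	}
	s := &store{db: db}
	defer s.db.Close()

	// e.g. after a large DeleteAll(), force a reopen
	if err := s.reopen("./db"); err != nil {
		log.Fatal(err)
	}

	if has, err := s.hasCollection("test"); err == nil {
		log.Println("has test collection:", has)
	}
}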
I don't suggest going for this approach, since it introduces a dependency in your code on an internal implementation detail of clover (the fact that it uses badger), apart from making the code more complicated. Moreover, even if you take this approach, the disk size could remain high.
Hi all :) I'm trying out the latest version of the v2 branch and I got this error. The real issue is that inside the clover storage layer it is handled with a Fatal error, so as a result my application exits every 5 minutes with no way to recover from the error.
Is there a way to fix this? Should I open a PR to allow for a non-fatal error log?
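To illustrate the kind of change I have in mind (a sketch against badger's public API, not clover's actual internals; the runGCLoop name and the badger version in the import are illustrative):

package main

import (
	"log"
	"time"

	badger "github.com/dgraph-io/badger/v4"
)

// runGCLoop sketches the requested behaviour: GC failures are logged and the
// loop keeps going, instead of terminating the embedding application.
func runGCLoop(bdb *badger.DB, interval time.Duration, discardRatio float64, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			err := bdb.RunValueLogGC(discardRatio)
			if err != nil && err != badger.ErrNoRewrite {
				log.Printf("value log GC failed: %v", err) // non-fatal
			}
		}
	}
}

func main() {
	bdb, err := badger.Open(badger.DefaultOptions("./db"))
	if err != nil {
		log.Fatal(err)
	}
	defer bdb.Close()

	stop := make(chan struct{})
	go runGCLoop(bdb, 5*time.Minute, 0.5, stop)

	time.Sleep(time.Minute) // stand-in for the application's lifetime
	close(stop)
}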