mochajs/mocha

out of memory issue: garbage collection ?

TomVanHaver opened this issue · 12 comments

Recently we ran into out of memory issues (in node) when executing our tests via mocha.

We are running about 5000 tests/specs and are guessing it has something to do with closures and garbage collection.

For now we have logically split our run into multiple smaller ones, but it is only a matter of time before we encounter the same problems.

I am certain this issue will also affect other teams.
Can someone tell me how we can pinpoint/analyze/solve the problem?

Thank you very much in advance.

As reference https://github.com/mochajs/mocha/pull/2037/files#diff-b3b53682a18f203ac8d29b0e277cad26R749

Would it be possible for this handling to be done after each root describe block, not only after a bunch of files (which, as far as I can see, get treated as a single suite) is done?

So multiple describe blocks in multiple files should be the right way of testing this

Anyone?

bump :-)

I'm not sure of the best way to diagnose memory leaks in Node (would love some assistance here), but otherwise I can't be of too much help without seeing your test(s).

Thank you for the feedback.

The problem is that as soon as the memory problems arise, a random test fails.
Therefore it is not possible to give you one "failing" test :-/

Since we believe our test setup is in line with the common approach, there must be other teams that face this kind of memory problem?

FWIW, I am running into a similar symptom, not sure if the cause is the same. I'm converting a nodeunit test suite to mocha.

@TomVanHaver I've found out what is retaining memory in my particular case; sharing it here in case it's something that might help you:

Our application builds out a large graph object with nodes. In each of our tests, we build out a new graph. I found out that a few of these nodes were calling setInterval, which caused the memory of the entire graph to stick around even after each test case was over. Clearing these intervals fixed my problem.
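To illustrate the fix described above, here is a minimal sketch of one way to do it: a hypothetical helper (trackedSetInterval/clearAllIntervals are names invented for this example, not part of any library) that records every interval a test starts so a teardown hook can clear them all, releasing whatever the callbacks' closures retain (the graph, in my case).

```javascript
// Hypothetical helper: track interval handles so teardown can clear them.
// An uncleared setInterval keeps its callback's closure (and everything
// it references) alive, which is exactly what retained our graph.
const activeIntervals = [];

function trackedSetInterval(fn, ms) {
  const handle = setInterval(fn, ms);
  activeIntervals.push(handle);
  return handle;
}

function clearAllIntervals() {
  while (activeIntervals.length > 0) {
    clearInterval(activeIntervals.pop());
  }
}
```

In a mocha suite you would then call `clearAllIntervals()` from an `afterEach` hook, so each test's graph becomes garbage-collectable as soon as the test finishes.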

The approach that finally got me there was this:

  1. Create 2 mocha test files with your standard setup/teardowns
  2. Add a lot (hundreds) of empty test cases like it('test', () => {assert(true)}) in each test file
  3. Create a before and after in each file and use heapdump to take heapsnapshots
  4. Compare the sizes of the snapshots, and perhaps zoom into beforeEach/afterEach snapshots
  5. Start commenting out code in your setup/teardowns and observe if the snapshot sizes change significantly.

Hopefully that helps you track down your memory retainment.

I'm going to close this; if someone can find a leak in Mocha itself, please create a new issue!

Anyone encountering this should be aware of the leakage library for tracking down memory leaks in their code.

I agree, but we are still looking into how to track leaks in specs or how to isolate suites.

@TomVanHaver I might be missing something, but isn't this exactly what leakage is for?

The following will throw if the spec is leaking. It does require you to actually modify the test code by wrapping the function, though, which is a bit of a pain (tool idea: use a codemod to dynamically create a version of your test suite where each test is wrapped in a leakage block, cleaning up after finish).

const { iterate } = require('leakage')

describe('myLib', () => {
  it('does not leak when doing stuff', () => {
    iterate(() => {
      const instance = myLib.createInstance()
      instance.doStuff('foo', 'bar')
    })
  })
})

@fatso83 That might work, but it can also give false positives due to the fact that your describe can still be holding on to stuff; see

Suite.prototype.cleanReferences = function cleanReferences() {