webpack-contrib/terser-webpack-plugin

Process aborts with 'out of memory' when using 2.0.0

nick opened this issue · 82 comments

nick commented
  • Operating System: OSX 10.14.6
  • Node Version: 10.16.0
  • NPM Version: 6.9.0
  • webpack Version: 4.39.3
  • terser-webpack-plugin Version: 2.0.0

Expected Behavior

Process does not abort

Actual Behavior

$ NODE_ENV=production ./node_modules/.bin/webpack --loglevel notice

<--- Last few GCs --->

[84294:0x102843000]    55749 ms: Mark-sweep 1312.4 (1444.3) -> 1305.0 (1446.3) MB, 622.0 / 0.0 ms  (average mu = 0.099, current mu = 0.040) allocation failure scavenge might not succeed
[84294:0x102843000]    56388 ms: Mark-sweep 1315.7 (1446.3) -> 1308.6 (1448.8) MB, 613.1 / 0.0 ms  (average mu = 0.070, current mu = 0.040) allocation failure scavenge might not succeed


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x3aa9ebf5be3d]
Security context: 0x37c8eea1e6e9 <JSObject>
    1: /* anonymous */(aka /* anonymous */) [0x37c895d8d969] [/Users/nick/Projects/origin/node_modules/terser-webpack-plugin/node_modules/webpack-sources/lib/applySourceMap.js:~58] [pc=0x3aa9ed4fe2c4](this=0x37c83a5026f1 <undefined>,chunk=0x37c803a7d979 <String[14]: createElement(>,middleMapping=0x37c8171046d9 <Object map = 0x37c835e45c79>)
    2: SourceNode...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x10003cf99 node::Abort() [/usr/local/bin/node]
 2: 0x10003d1a3 node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
 3: 0x1001b7835 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 4: 0x100585682 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
 5: 0x100588155 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/usr/local/bin/node]
 6: 0x100583fff v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/local/bin/node]
 7: 0x1005821d4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
 8: 0x10058ea6c v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/usr/local/bin/node]
 9: 0x10058eaef v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/usr/local/bin/node]
10: 0x10055e434 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/usr/local/bin/node]
11: 0x1007e6714 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/local/bin/node]
12: 0x3aa9ebf5be3d 
13: 0x3aa9ed4fe2c4 
14: 0x3aa9ed4b7d28 
Abort trap: 6

Code

https://github.com/OriginProtocol/origin/blob/master/dapps/marketplace/webpack.config.js

How Do We Reproduce?

git clone https://github.com/OriginProtocol/origin.git
cd origin
# Update dapps/marketplace/package.json to use v2.0.0 of terser plugin
yarn
cd dapps/marketplace
yarn build

Thanks for the issue, investigating

Just for information, change this line to:

"build:js": "NODE_ENV=production webpack --progress --loglevel notice",

and see what happens.

The error appears at the source map generation step:

93% after chunk asset optimization SourceMapDevToolPlugin app.9949447c.js generate SourceMap

Using sourceMap: false fixes the problem; increasing memory for Node also fixes it. I'll keep investigating.
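As a config sketch, the two workarounds above look roughly like this (illustrative only; option names follow the plugin's 2.x README, and the surrounding config is assumed):

```javascript
// Illustrative fragment for webpack.config.js (not the project's actual config).
// Workaround 1: skip source map generation during minification.
const terserOptions = {
  sourceMap: false,
};
// used as: optimization: { minimizer: [new TerserPlugin(terserOptions)] }

// Workaround 2: raise Node's heap limit instead (shell):
//   node --max-old-space-size=4096 ./node_modules/.bin/webpack
```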

Looks like terser is leaking

Why the problem appears:
Here we changed the logic in 8b88b39

Before, even for a single file we spawned a new thread for uglification; now we use the same thread for it. Removing tasks.length > 1 from https://github.com/webpack-contrib/terser-webpack-plugin/blob/master/src/TaskRunner.js#L48 fixes the problem. Why we did this: to reduce memory and CPU usage when you have only one file (creating a new thread takes more memory and increases CPU load).

But another problem occurred: a memory leak. terser doesn't clear something, so the current thread began to consume more memory.
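The dispatch decision described above can be sketched as follows (a simplified illustration with made-up names, not the plugin's actual TaskRunner code):

```javascript
// Simplified sketch of the v2.0.0 dispatch decision (illustrative only).
// With a single task, minification runs in the current thread; any memory
// terser fails to release then accumulates in the main webpack process.
function shouldUseWorkers(taskCount, parallel) {
  // Dropping the `taskCount > 1` check restores the old "always spawn" behaviour.
  return Boolean(parallel) && taskCount > 1;
}
```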

A simple investigation: I used this code:

const used = process.memoryUsage().heapUsed / 1024 / 1024;
console.log(`The script uses approximately ${Math.round(used * 100) / 100} MB`);

and put these lines after https://github.com/webpack-contrib/terser-webpack-plugin/blob/master/src/index.js#L272

New version (we don't create a new thread for one file):

The script uses approximately 1016.37 MB

Old version (new thread for one file):

The script uses approximately 362.56 MB

Around 700 MB is leaking

/cc @fabiosantoscode

/cc @nick also, caching is not working with your code, because you always generate something new in the output; don't use:

BUILD_TIMESTAMP: +new Date()

Caching is not working in your app, and neither is long-term caching

@nick try https://github.com/webpack-contrib/terser-webpack-plugin/releases/tag/v2.0.1; anyway, we need to investigate why terser consumes a lot of memory

Let's wait for @fabiosantoscode's answer

nick commented

Thanks for looking into this so quickly, @evilebottnawi - and thanks for the tip on the cache issue 😊

Can confirm 2.0.1 works as before. Feel free to close unless you want to keep it open for the memory issue.

Interesting. I'll try to come up with the exact tag where this started to happen.

But I kind of have something else I have to fix first, though.

@fabiosantoscode 👍 yep, feel free to ping me, I can help you track down the memory leak

I tried creating a file that attempts to encrypt a lot of things to try and create a memory leak, but I can't reproduce this at all. I looked into the commit that fixes this. Isn't this just a 2.0.0 specific problem?

@fabiosantoscode no, we now just run terser in a separate thread instead of the current one, but it looks like terser has a memory leak, because there is no problem in parallel mode but there is a problem when you use only one thread

Can this be reproduced with the default options? The Terser tests aren't leaking memory at all.

In my experience, there hasn't been any significant difference in memory usage. Both V1 and V2 are hovering around 500MB of memory (and same compilation time) for one of our smaller apps. Measured with NODE_ENV=production time -l ... on MacOS. The dependencies are basically the same (Node, Webpack, etc).

@eliseumds that is expected, because terser still consumes a lot of memory; we need to find a way to optimize terser and decrease minification time

Build time more than doubled from 6 minutes to 14 minutes in one of our bigger projects by just updating to version 2 of this dependency.

Our build time also doubled when upgrading to version 2

@rrelmy @JoshRobertson sorry, but your comments are not helpful. Please provide information on which version you used before. Also, we have a breaking change: source maps are now generated based on the devtool value, so you may now be generating source maps, which increases build time

korya commented

Same story here. After we upgraded terser-webpack-plugin from v1.4.1 to v2.2.1, the build process started to run out of memory. As a result, our CI builds started to fail. We have some big chunks though: 5 over 1 MB, 2 over 2 MB. Downgrading back to v1.4.1 resolves the problem. Turning off the parallel build resolves the problem as well.

The error that we get (it is reported 3 times in a row):

ERROR in c/92f867d3e514eb399b9d.js from Terser
Error: Call retries were exceeded
    at ChildProcessWorker.initialize (/home/circleci/workdir/project/node_modules/jest-worker/build/workers/ChildProcessWorker.js:193:21)
    at ChildProcessWorker.onExit (/home/circleci/workdir/project/node_modules/jest-worker/build/workers/ChildProcessWorker.js:263:12)
    at ChildProcess.emit (events.js:210:5)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)

Below are the results summarized in a table:

version | parallel | build outcome
v1.4.1  | Y        | OK
v2.2.1  | Y        | FAILURE: runs out of memory
v2.2.1  | N        | OK

Note: in all 3 cases above everything else remained unchanged, only 2 variables changed: the version and the parallel option of the plugin.

@korya can you create a reproducible test repo?

korya commented

@evilebottnawi Sorry, I probably won't be able to. Our repository is private, and coming up with an artificial example to reproduce this issue would be a nightmare.

It seems that the memory consumption of the plugin just grew. v1.4.1 is OK, but it is probably very close to hitting the memory limit. v2.2.1's memory consumption grew (maybe a lot, maybe just a bit) and it started to consistently hit the limit. This is my very surface-level analysis of our problem. I would even call it an intuition rather than analysis. I also noticed that the number of transitive dependencies of the plugin grew considerably in v2.2.1.

Again, these are just my thoughts; I did not perform any deep analysis. I am sorry that I cannot help any more than that. If I have some time, I will try to gather more info. But I don't believe I can easily come up with a reproducible test.

@korya anyway, thanks for the feedback. Yes, the new version requires more memory:

  • now we don't break source maps anymore
  • better parallel support

All this requires a little more memory. We have an issue open to reduce memory/CPU usage: terser/terser#478.

@korya We had the same error and the same behaviour.

The child processes immediately return with an exit code of 1, and after a number of retries the error message occurs. I turned off the silent option in ChildProcessWorker.js (of jest-worker).
Then I got the following error message: error: unknown option '--trace-deprecation'

Our build scripts had --trace-deprecation enabled, and somehow the child process is not able to handle this option. I just disabled it for now.

Temporarily replace silent: true with silent: false in node_modules\jest-worker\build\workers\ChildProcessWorker.js and check for error messages.

@mateder Can you create a minimal reproducible test repo?

The same problem started to appear for us, and nothing changed other than adding a new package to our build. In our case, parallel doesn't make much of a difference either; instead the issue is source maps: if set to false, everything works fine, otherwise we get out-of-memory errors.

I tried these versions: 1.3.0 (the one we have), 1.4.1, and 2.2.1; all of them fail.

@ahmedelgabri maybe you can create a reproducible test repo? I will investigate it

@evilebottnawi Sorry, I probably won't be able to. Our repository is private, and coming up with an artificial example to reproduce this issue would be a nightmare.

@evilebottnawi Same here. The problem as I see it is that when the chunks grow (in number and also in size), the plugin can't handle it anymore.

@ahmedelgabri can you try increasing Node.js memory? Sometimes it is impossible to solve without increasing memory

Yeah I was just about to do that after reading facebook/docusaurus#1782, but this is just a workaround…

I think it's also worth investigating whether it's solely a terser-webpack-plugin problem. The thing is, a lot of other webpack plugins also consume a lot of memory, which makes Node.js garbage collection slow. It just happens that terser-webpack-plugin also uses a lot of memory and runs at the last part of compilation, making it seem like it's the one consuming all the memory.

In particular, source maps are the biggest memory-eater 😃

I think it definitely is a terser issue, because normally our build without terser is ~2 mins and with terser it's ~9 mins. And in this case – the out-of-memory issue – if I disable terser's sourcemap option (or even disable minification completely) everything works fine. But if I enable sourcemap in terser I get the out-of-memory issue.

Also, it always fails on this step:

 98% [0] chunk asset optimization TerserPlugin

Cool then, I think it's indeed true that terser v2 consumes more memory. Now I wonder if we can do something to help. I think it's also very related to https://github.com/terser/terser since this is just a plugin that wraps terser for webpack. I don't know much about terser's internal codebase, so I guess we can only be patient 😉

Might be webpack-plugin specific too, according to terser/terser#164

@ahmedelgabri can you try increasing Node.js memory? Sometimes it is impossible to solve without increasing memory

So I tried that; it worked, but the build now takes 19 mins!

NODE_ENV=production node --max_old_space_size=8000 ./node_modules/.bin/webpack --progress --loglevel notice

Doesn't reverting the terser-webpack-plugin version fix the issue? Or disabling parallelism? Terser has been using the same amount of memory since it was forked off UglifyJS.

I'm looking into ways of using less RAM, but they are going to take a while because the current status quo is very memory efficient. The only thing that might be changed is the fact that we load all the AST into RAM. While that is the case, more JavaScript means more AST nodes which means more RAM.

@filipesilva has done a memory assessment of Terser and it turns out that basically all memory allocation is in the parse phase: the part which creates the AST nodes. I've already changed some booleans in the AST to bitfields, but this didn't help much.

Doesn't reverting the terser-webpack-plugin version fix the issue? Or disabling parallelism?

In my case none of this worked; the only things that worked were disabling source maps (which is not an option) or increasing memory.

@ahmedelgabri so you've tried to set parallel: false?

Heya, I made a repo with some benchmarks comparing terser-webpack-plugin@2.2.2 and terser-webpack-plugin@1.4.2, as well as using parallel: 2 and parallel: false, on a codebase that produces around 300 small lazy chunks. You can find the repro in https://github.com/filipesilva/terser-webpack-plugin-143.

For this codebase I couldn't really see a memory regression on 2.2.2. If anything it used around 10% less memory for the parallel: 2 case, and 5% less memory for the parallel: false case. Below are the full results copied from the README.md.


terser-webpack-plugin-143

Repro for #143.
This is a stripped down version of https://github.com/filipesilva/ng-speed-rebuild.

Repro steps:

git clone https://github.com/filipesilva/terser-webpack-plugin-143
cd terser-webpack-plugin-143
npm install
npm run ngc
npm ls terser terser-webpack-plugin
npm run benchmark

This will show you used versions of terser and terser-webpack-plugin and get you benchmark numbers when using terser-webpack-plugin@2.2.2.
The numbers I got were:

[benchmark] Benchmarking process over 5 iterations, with up to 5 retries.
[benchmark]   npm run webpack (at D:\sandbox\terser-webpack-plugin-143)
[benchmark] Process Stats
[benchmark]   Elapsed Time: 48539.20 ms (46150.00, 50034.00, 48728.00, 48844.00, 48940.00)
[benchmark]   Average Process usage: 3.85 process(es) (3.85, 3.85, 3.85, 3.85, 3.85)
[benchmark]   Peak Process usage: 5.00 process(es) (5.00, 5.00, 5.00, 5.00, 5.00)
[benchmark]   Average CPU usage: 29.55 % (28.12, 29.36, 30.42, 30.40, 29.41)
[benchmark]   Peak CPU usage: 170.62 % (165.60, 146.90, 182.90, 171.80, 185.90)
[benchmark]   Average Memory usage: 561.19 MB (572.04, 582.71, 542.59, 532.05, 576.56)
[benchmark]   Peak Memory usage: 1058.25 MB (1086.04, 1109.49, 1021.25, 996.13, 1078.35)

Disabling parallelism by changing parallel: 2 to parallel: false in webpack.config.js shows the following numbers:

[benchmark] Benchmarking process over 5 iterations, with up to 5 retries.
[benchmark]   npm run webpack (at D:\sandbox\terser-webpack-plugin-143)
[benchmark] Process Stats
[benchmark]   Elapsed Time: 53565.20 ms (52965.00, 52738.00, 52954.00, 54683.00, 54486.00)
[benchmark]   Average Process usage: 2.95 process(es) (2.95, 2.95, 2.95, 2.95, 2.95)
[benchmark]   Peak Process usage: 3.00 process(es) (3.00, 3.00, 3.00, 3.00, 3.00)
[benchmark]   Average CPU usage: 18.65 % (18.02, 19.33, 19.56, 18.32, 18.05)
[benchmark]   Peak CPU usage: 137.50 % (121.90, 135.90, 161.00, 129.70, 139.00)
[benchmark]   Average Memory usage: 458.26 MB (452.28, 452.60, 459.27, 467.64, 459.52)
[benchmark]   Peak Memory usage: 789.43 MB (812.58, 748.88, 812.85, 821.44, 751.42)

Set parallel: 2 again then follow these commands to get numbers for terser-webpack-plugin@1.4.2:

npm install terser-webpack-plugin@1.4.2 -DE
npm ls terser terser-webpack-plugin
npm run benchmark

The numbers I got were:

[benchmark] Benchmarking process over 5 iterations, with up to 5 retries.
[benchmark]   npm run webpack (at D:\sandbox\terser-webpack-plugin-143)
[benchmark] Process Stats
[benchmark]   Elapsed Time: 47165.40 ms (48127.00, 45810.00, 47589.00, 48426.00, 45875.00)
[benchmark]   Average Process usage: 4.14 process(es) (3.85, 3.85, 5.28, 3.87, 3.86)
[benchmark]   Peak Process usage: 5.60 process(es) (5.00, 5.00, 8.00, 5.00, 5.00)
[benchmark]   Average CPU usage: 28.86 % (29.80, 27.49, 29.44, 29.49, 28.06)
[benchmark]   Peak CPU usage: 156.60 % (151.60, 151.70, 168.70, 162.60, 148.40)
[benchmark]   Average Memory usage: 728.12 MB (619.87, 584.03, 1303.96, 569.29, 563.47)
[benchmark]   Peak Memory usage: 1495.66 MB (1251.39, 1172.98, 2721.88, 1175.47, 1156.58)

Disabling parallelism I got the following numbers:

[benchmark] Benchmarking process over 5 iterations, with up to 5 retries.
[benchmark]   npm run webpack (at D:\sandbox\terser-webpack-plugin-143)
[benchmark] Process Stats
[benchmark]   Elapsed Time: 54298.40 ms (55772.00, 52188.00, 55141.00, 52717.00, 55674.00)
[benchmark]   Average Process usage: 2.95 process(es) (2.95, 2.95, 2.95, 2.95, 2.95)
[benchmark]   Peak Process usage: 3.00 process(es) (3.00, 3.00, 3.00, 3.00, 3.00)
[benchmark]   Average CPU usage: 18.37 % (20.50, 17.37, 18.00, 16.96, 19.03)
[benchmark]   Peak CPU usage: 139.40 % (181.30, 115.70, 118.70, 117.30, 164.00)
[benchmark]   Average Memory usage: 458.09 MB (464.41, 456.70, 459.70, 461.76, 447.88)
[benchmark]   Peak Memory usage: 838.69 MB (840.65, 840.51, 843.47, 830.24, 838.60)

On all tests npm ls terser terser-webpack-plugin showed terser@4.4.2 was in use.

I also wanted to try with the original repro using the angular-devkit-benchmark-0.800.0-beta.18.tgz benchmark package inside the repro in my comment above, but following the original instructions I got the error below:

$ yarn
yarn install v1.17.3
glob error { [Error: EPERM: operation not permitted, scandir 'D:\sandbox\origin\packages\contracts\releases\latest']
  errno: -4048,
  code: 'EPERM',
  syscall: 'scandir',
  path: 'D:\\sandbox\\origin\\packages\\contracts\\releases\\latest' }
error An unexpected error occurred: "EPERM: operation not permitted, scandir 'D:\\sandbox\\origin\\packages\\contracts\\releases\\latest'".
info If you think this is a bug, please open a bug report with the information provided in "D:\\sandbox\\origin\\yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.

Maybe @nick can try.

Got a similar problem when setting sourceMap: true.

For me, the solution was to start using node.js v13.3 instead of v12.6.

Details of my case:
For terser-webpack-plugin v2 the error was:

ERROR in app.38d38ada94a68e7e1550.js from Terser
Error: Call retries were exceeded
    at ChildProcessWorker.initialize (/opt/atlassian/pipelines/agent/build/confluence-office365-frontend/node_modules/jest-worker/build/workers/ChildProcessWorker.js:193:21)
    at ChildProcessWorker.onExit (/opt/atlassian/pipelines/agent/build/confluence-office365-frontend/node_modules/jest-worker/build/workers/ChildProcessWorker.js:263:12)
    at ChildProcess.emit (events.js:210:5)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)

For terser-webpack-plugin v1 the error was just:

Killed
npm ERR! code ELIFECYCLE
npm ERR! errno 137

According to the node.js changelog, they've updated V8, which appears to save some memory.

Regarding Terser's use of memory, I've looked at a more efficient (CPU and memory as well) parser, https://github.com/meriyah/meriyah, which seems to use 85% as much memory as Terser does (I tested this in node 12. In node 13, they both use less memory but the proportion remains at 85%). This is indeed a reduction, but I don't see switching to this new parser as much of a fix.

Terser uses around 73mb to parse terser's dist bundle three times (and store the parsed trees in memory). Meriyah uses around 61mb to achieve the same task. I've added Babel's parser, babylon, for comparison, and it uses around 114mb. Terser's bundle is around 344kb and is minified and mangled, making it a pretty dense payload.

What would really fix this is to perform optimization on a per-module basis. If we throw enormous chunks of javascript into something that needs to parse everything into an AST, without bumping up the node memory limit, we're going to run into problems, no matter what we choose to throw it at. Meriyah would crash with out-of-memory. Babel would crash with out-of-memory. Terser crashes just the same. Parsing per-module is what Rollup has been doing, and these crashes don't happen at all, because modules simply are never this large. Please consider this change, which, besides being more resilient, allows for better parallelism since the workload is more granular.

@fabiosantoscode

What would really fix this is to perform optimization on a per-module basis. If we throw enormous chunks of javascript into something that needs to parse everything into an AST, without bumping up the node memory limit, we're going to run into problems, no matter what we choose to throw it at. Meriyah would crash with out-of-memory. Babel would crash with out-of-memory. Terser crashes just the same. Parsing per-module is what Rollup has been doing, and these crashes don't happen at all, because modules simply are never this large. Please consider this change, which, besides being more resilient, allows for better parallelism since the workload is more granular.

I described above why it is impossible, sorry: a lot of plugins can include their own content, and we generate runtime code that should be minified too, so it is a big breaking change and would be very bad for DX

For reference, #104 is the tracking issue for per-module parsing and contains more context on this topic.

@fabiosantoscode why not switch to acorn for parsing? It is very stable and fast and consumes less memory

@evilebottnawi in my test it uses just 1mb less than meriyah.

@fabiosantoscode it would be great to see a table like: current parser vs meriyah vs acorn

@evilebottnawi is it possible to minify the external code as a whole, without including the internal code?

For example:

(function (modules) {
  // non-module code goes here
})(the_modules_deadbeef)

Where deadbeef is a randomly generated hex string.

Then the_modules_deadbeef can be replaced with the real modules (which I suppose don't have to be minified together with everything else). I can guarantee you that the_modules_{random string} will never be mangled, since every mangler worth its salt takes care not to mangle globals.

No time for tables, but here are the three parsers doing the same thing (parse 3 times and store in different global variables, then console.log(process.memoryUsage())):

fabio@fabio-thinkpad ♥  node --expose-gc acorn.js
{
  rss: 105439232,
  heapTotal: 92872704,
  heapUsed: 60534872,
  external: 820380
}
fabio@fabio-thinkpad ♥  node --expose-gc meriyah.js
{
  rss: 109379584,
  heapTotal: 87523328,
  heapUsed: 61681920,
  external: 1582168
}
fabio@fabio-thinkpad ♥  node --expose-gc terser.js
{
  rss: 130007040,
  heapTotal: 108511232,
  heapUsed: 73331712,
  external: 820380
}

(--expose-gc is passed so that the code can call gc() before inspecting memory usage)
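The measurement set-up described above can be sketched like this (illustrative; `parse` stands in for whichever parser is under test, e.g. require('acorn').parse):

```javascript
// Run with `node --expose-gc measure.js` so global.gc is available.
function measureParse(parse, source, iterations = 3) {
  const trees = [];
  for (let i = 0; i < iterations; i += 1) {
    trees.push(parse(source)); // keep every tree alive, as in the test above
  }
  if (typeof global.gc === 'function') global.gc(); // collect before sampling
  return { heapUsedMB: process.memoryUsage().heapUsed / 1024 / 1024, trees };
}
```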

@evilebottnawi is it possible to minify the external code as a whole, without including the internal code?

No, we can't minify code by modules. Honestly, I'm tired of explaining that this is not possible (it is possible, but very ineffective). We do a lot of optimizations like module concatenation, removing unused imports/exports, and more; we don't know at which position another developer will include their own code, and developers can modify any part of the code during compilation. Plus we have our own runtime code. For full and very effective optimization, we need to run the minifier on the whole file.

No time for tables but here's the three parsers doing the same thing (parse 3 times and store in different global variables, then console.log(process.memoryUsage())):

Looks like acorn would allow us to decrease memory consumption. Why acorn? Because webpack/rollup/parcel use that parser as a standard, and it would allow us to reduce node_modules size.

It is also very stable and has a good API and production usage.

@evilebottnawi thanks for making that clear. I didn't know the whole story, that's why I asked :)

Does webpack at any point in time hold an entire chunk in ESTree form? Or is the code turned into a string before being passed to Terser?

Acorn does improve memory consumption, but it's a big deal to change terser to use the standard ESTree AST. Terser currently uses its own AST and it would be a massive undertaking to change to the standard AST. Just look at how large Terser's lib/compress/index.js is :). I think it would be great to use a standard AST (we could drop our bespoke JS parser and stringifier and probably leverage more help from the community due to the familiar format), but, again, it would be incredibly hard. Besides, look at the memory usage numbers. It's just a ~15% improvement on node 13.

What can be done is to try and improve Terser's memory consumption. We can probably get some of that 15% improvement.

@filipesilva sorry for not responding to your comment, I've cloned your repo and all but didn't reply at the time :) thanks for looking into this.

@evilebottnawi found the leak! It is a tiny leak unless you pass a gigantic chunk to Terser. I'll plug it when I can and ship.

Terser 4.4.3 released. I think this can be closed now. I have identified a new way to save memory and will open an issue here.

Please try https://github.com/webpack-contrib/terser-webpack-plugin/releases/tag/v2.3.1, should be faster and consume less memory, feedback welcome

/cc @filipesilva

Also, @evilebottnawi, if the input is an ESTree AST, Terser can handle that as well. It has an ESTree-to-Terser converter which can be used if the input is already an AST.

With 2.3.1 and terser 4.4.3, we're still seeing "Error: Call retries were exceeded" occasionally (not consistently). Our biggest chunk is 700k. I'm going to try giving webpack more memory, but I wanted to let you know that those versions are still broken for us (we upgraded today and started seeing this...)

@aaronjensen maybe you can create a reproducible test repo?

@evilebottnawi Sorry, but I don't see how I could. The code is closed source and it's likely the specifics of the code that's causing the issue. Furthermore, I cannot reproduce it locally, it only happens on CI and only occasionally.

@aaronjensen can you reproduce it by creating a regular build (without eval('source code of module') in each module) and passing it on to Terser?

Also getting this error on CircleCI with this container: 2 CPU / 4096 MB

Setup

Node 12.14.0
Yarn 1.21.1
webpack 4.41.2
terser-webpack-plugin 2.3.1
terser 4.4.3
parallel-webpack 2.4.0


--max-old-space-size=3400

Compile in two threads by parallel-webpack

[00:22:58] Webpack (94%) - Optimize modules (after asset optimization)
[00:22:58] Webpack (95%) - Optimize modules (after seal)

[00:22:58] Webpack: Finished after 62.969 seconds.

[WEBPACK] Build failed after 65.162 seconds
[WEBPACK] Errors building [name]-[chunkhash].js
vendors-1b8c608ee4b81e2f348b.js from Terser
Error: Call retries were exceeded
    at ChildProcessWorker.initialize (/home/circleci/main/node_modules/jest-worker/build/workers/ChildProcessWorker.js:193:21)
    at ChildProcessWorker.onExit (/home/circleci/main/node_modules/jest-worker/build/workers/ChildProcessWorker.js:263:12)
    at ChildProcess.emit (events.js:210:5)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)

The error appears when there is no cache in node_modules/.cache

Total size of assets: 12MB
Total count of files: 314

three largest files:

vendors.js - 2.7MB
vendors.js.gz - 782kb

lite-common.js - 1.7MB
lite-common.js.gz - 494kb

bundle_editor.js - 723kb
bundle_editor.js.gz - 198kb

Increasing the container to 3CPU/6144MB with --max-old-space-size=5900 solved this problem

So far it has not been possible to make a public demo

Something is wrong with parallelism on CI; maybe a bug in jest-worker

@korya #issuecomment-546160327 helped me a lot! Thanks!

Does it happen only on CI?

Does it happen only on CI?

Yes, for me it only happened in CI. We tried to do the exact same thing as CI does locally, but locally it just worked and CI crashed. Turning off the parallel feature of terser-webpack-plugin worked for me.

@martijn10kb can you provide information about the CI environment (OS, version, CPU count, memory, etc.)?

I'm seeing a similar issue, but what I'm currently more concerned about is that when the build process fails because of this, the webpack process still returns 0:

    ERROR in inbox-components-Root.46040ed2466b99251963.js from Terser
    Error: Call retries were exceeded
        at ChildProcessWorker.initialize (/home/circleci/app/node_modules/jest-worker/build/workers/ChildProcessWorker.js:193:21)
        at ChildProcessWorker.onExit (/home/circleci/app/node_modules/jest-worker/build/workers/ChildProcessWorker.js:263:12)
        at ChildProcess.emit (events.js:210:5)
        at ChildProcess.EventEmitter.emit (domain.js:475:20)
        at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)
    
    ERROR in main.301244aeb2a335766683.js from Terser
    Error: Call retries were exceeded
        at ChildProcessWorker.initialize (/home/circleci/app/node_modules/jest-worker/build/workers/ChildProcessWorker.js:193:21)
        at ChildProcessWorker.onExit (/home/circleci/app/node_modules/jest-worker/build/workers/ChildProcessWorker.js:263:12)
        at ChildProcess.emit (events.js:210:5)
        at ChildProcess.EventEmitter.emit (domain.js:475:20)
        at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)
[15:45:52] Finished 'bundle' after 95944 ms
[15:45:52] Finished 'build' after 95961 ms
Done in 97.35s.
CircleCI received exit code 0

Is this a misconfiguration on my end, or is this some sort of bug?

@tstirrat15 can you create a reproducible test repo? This looks like a bug in jest-worker

Mmmmmaybe? I'd be very surprised if it weren't very sensitive to the environment. Is it worth opening a bug over on that repo?

We came to the conclusion that this is being caused by the worker-farm dependency, which detects the CPU count using require('os').cpus().length.


On our medium Circle CI instances this returns 36, but there are 2 CPUs on that resource class.

Adding this to our onCreateWebpackConfig function made the problem go away.

config.optimization.minimizer[0].options.parallel = false

BTW, worker-farm seems a little abandoned: only two commits ever, and no maintenance at all in more than three years. For the maintainers here it might be worth considering a migration to node-worker-farm, which while it does seem to have the same CPU count detection issue, will probably be more worthwhile for one of us to submit a bug report to about it.

I guess it could also be the case that require('os').cpus().length is perfectly correct for every use case except this plugin. Maybe it could be as simple as overriding that maxConcurrentWorkers default if process.env.CIRCLECI === true?

@hencatsmith in the new version we don't use worker-farm; we migrated to jest-worker

@tstirrat15 Can you try #203? It should show a normal error

Nice going @hencatsmith.

Does jest-worker make the same miscount?

os.cpus().length is also not the best way to count CPU cores: it counts hyperthreads, which is not OK for truly CPU-bound applications. This package retrieves the correct count: https://www.npmjs.com/package/physical-cpu-count

Does jest-worker make the same miscount?

I don't think it's a miscount. If we have the ability to use 36 threads, why not use them?

@aaronjensen we have an option for this, so you can set the right value

Yeah @fabiosantoscode, I think I see the same implementation in jest-worker


But I’ll have to wait until next week at the office to double-check whether the issue persists in the actual Gatsby + Circle CI environment where I first encountered it. It should be easy enough to confirm by removing that parallel = false config line and running a couple of test builds. Does anyone happen to know if Gatsby already pulled in a newer version of terser-webpack-plugin with the new dependency?

If the 36 is an issue in jest-worker too then we can just open an issue there. That said, I think this can be closed?

Possibly yeah! Might be good to fix this as deep as possible in the dependency tree as it’s probably quite widespread in its impact

I created a project that reliably reproduces out-of-memory errors when using terser-webpack-plugin 2.3.2: https://github.com/cjlarose/terser-webpack-plugin-out-of-memory

If you're using CircleCI or another execution environment where os.cpus().length returns many more CPUs than are allocated to your container, I strongly recommend setting the parallel option of TerserWebpackPlugin explicitly (setting it to the number of cores available in your resource class is a good rule of thumb).

// webpack.config.js
const path = require('path');
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  entry: () => {
    ...
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].bundle.js'
  },
  optimization: {
    minimizer: [
      new TerserPlugin({ parallel: 8 }),
    ],
  },
};

This will help avoid ENOMEM errors that happen because terser-webpack-plugin tries to fork too many processes. It will help avoid some instances of JavaScript heap out of memory errors, but not all of them. For projects that have many webpack entries (and especially if they're big), setting parallel explicitly is not sufficient in avoiding JavaScript heap out of memory errors.

I proposed a fix in #206.

I think I was able to solve this problem. Big thanks to @cjlarose for the inspiration and a great idea; honestly, I don’t know why I didn’t arrive at this solution before. I'm running tests now so I can post them here; I think we'll release a patch version today and close this problem.

I want to warn that this reduces consumption, but for really big projects (for example, 5000+ entries) you still need to increase Node's memory limit
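For reference, the usual way to raise Node's heap limit is V8's max-old-space-size flag; the 4096 MB value below is illustrative, not a recommendation from this thread:

```shell
# Allow the Node process a 4 GB old-space heap before the build.
export NODE_OPTIONS="--max-old-space-size=4096"
# then run the build as usual, e.g.:
#   npx webpack --mode production
```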

The release will be this evening; I want to add more tests first (increase coverage)

Hey, just want to comment that this parallelism is still unstable, though it honestly seems to come down to the bare metal here.

I'm on a Ryzen 5800 and Ubuntu 20.04. If I let it figure out parallel itself, my CPU and RAM max out and the process ends up killed. If I use an env variable and set it to, say, 6, I borderline max out memory.

If I run this on one of our web nodes, which are typically 4-core Amazon t3 instances running 18.04, the build works just fine.

All runs were on Node v14.17.0 and the latest webpack and Babel versions.

I have a limited understanding of what is happening under the hood, but I can say that we have many 'entry' apps, around 30+. If I cull these entries as a sanity check it does seem to help, but the build still maxes out and dies.

There's something going on here with how it forks and utilizes system resources, and like I said, it may only be affecting my hardware group here.

If anyone is finding their way to this comment, here's an example of how I solved my issue:

  const TerserPlugin = require('terser-webpack-plugin');

  let parallel = 4;

  if (typeof process.env.WEBPACK_PARALLELISM !== 'undefined') {
    parallel = parseInt(process.env.WEBPACK_PARALLELISM, 10);
  }

  config.optimization = {
    minimize: true,
    minimizer: [new TerserPlugin({ parallel })],
  };