Coverage runs correctly locally but fails in GH with OOM/GC errors
MorenoMdz opened this issue · 4 comments
Describe the bug
Coverage runs correctly locally but fails in GH with OOM/GC errors.
Expected behavior
The coverage run should complete.
Details
- Action version: v2 using checkout v2 or v3
- OS, where your action is running (windows, linux): ubuntu latest
name: 'coverage'
on:
  push:
    branches: [main]
  pull_request:
    branches: ['*']
jobs:
  coverage:
    permissions:
      checks: write
      pull-requests: write
      contents: write
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js 16.14.0
        uses: actions/setup-node@v2
        with:
          node-version: 16.14.0
      - uses: ArtiomTr/jest-coverage-report-action@v2
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          package-manager: yarn
          test-script: yarn test
          annotations: none
Additional context
PASS src/models/shift-requests/shift-requests.service.spec.ts (5.709 s)
<--- Last few GCs --->
[2189:0x52adb30] 389355 ms: Mark-sweep 1869.9 (2092.2) -> 1856.6 (2088.3) MB, 1727.1 / 0.3 ms (average mu = 0.142, current mu = 0.078) allocation failure scavenge might not succeed
[2189:0x52adb30] 390622 ms: Mark-sweep 1872.8 (2088.6) -> 1857.8 (2085.3) MB, 1204.8 / 0.0 ms (average mu = 0.105, current mu = 0.049) allocation failure scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0xb09980 node::Abort() [/opt/hostedtoolcache/node/16.14.0/x64/bin/node]
2: 0xa1c235 node::FatalError(char const*, char const*) [/opt/hostedtoolcache/node/16.14.0/x64/bin/node]
3: 0xcf77be v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/opt/hostedtoolcache/node/16.14.0/x64/bin/node]
4: 0xcf7b37 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/opt/hostedtoolcache/node/16.14.0/x64/bin/node]
5: 0xeaf3d5 [/opt/hostedtoolcache/node/16.14.0/x64/bin/node]
6: 0xeafeb6 [/opt/hostedtoolcache/node/16.14.0/x64/bin/node]
7: 0xebe3de [/opt/hostedtoolcache/node/16.14.0/x64/bin/node]
8: 0xebee20 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/opt/hostedtoolcache/node/16.14.0/x64/bin/node]
9: 0xec1d9e v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/opt/hostedtoolcache/node/16.14.0/x64/bin/node]
10: 0xe832da v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/opt/hostedtoolcache/node/16.14.0/x64/bin/node]
11: 0x11fc026 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/opt/hostedtoolcache/node/16.14.0/x64/bin/node]
12: 0x15f0a99 [/opt/hostedtoolcache/node/16.14.0/x64/bin/node]
Aborted (core dumped)
error Command failed with exit code 134.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Running tests failed
Error: Error: The process '/usr/local/bin/yarn' failed with exit code 134
Error: The process '/usr/local/bin/yarn' failed with exit code 134
at rh._setResult (/home/runner/work/_actions/ArtiomTr/jest-coverage-report-action/v2/dist/index.js:21:17898)
at rh.CheckComplete (/home/runner/work/_actions/ArtiomTr/jest-coverage-report-action/v2/dist/index.js:21:17485)
at ChildProcess.<anonymous> (/home/runner/work/_actions/ArtiomTr/jest-coverage-report-action/v2/dist/index.js:21:16477)
at ChildProcess.emit (node:events:520:28)
at maybeClose (node:internal/child_process:1092:16)
at Process.ChildProcess._handle.onexit (node:internal/child_process:302:5)
Running tests ended
Begin collecting coverage...
Loading code coverage from file: report.json
Collecting coverage failed
Error: Coverage output file not found. (file "report.json" not found)
Error: Coverage output file not found. (file "report.json" not found)
at L7 (/home/runner/work/_actions/ArtiomTr/jest-coverage-report-action/v2/dist/index.js:2220:340)
at async ei (/home/runner/work/_actions/ArtiomTr/jest-coverage-report-action/v2/dist/index.js:2221:158)
at async Py (/home/runner/work/_actions/ArtiomTr/jest-coverage-report-action/v2/dist/index.js:2221:633)
at async /home/runner/work/_actions/ArtiomTr/jest-coverage-report-action/v2/dist/index.js:2230:14870
at async ei (/home/runner/work/_actions/ArtiomTr/jest-coverage-report-action/v2/dist/index.js:2221:158)
at async UD (/home/runner/work/_actions/ArtiomTr/jest-coverage-report-action/v2/dist/index.js:2230:14806)
Collecting coverage ended
Begin parsing coverage...
Parsing coverage skipped
Parsing coverage ended
Head coverage collection failed
Error: Getting code coverage data failed.
Error: Getting code coverage data failed.
at Py (/home/runner/work/_actions/ArtiomTr/jest-coverage-report-action/v2/dist/index.js:2221:768)
at async /home/runner/work/_actions/ArtiomTr/jest-coverage-report-action/v2/dist/index.js:2230:14870
at async ei (/home/runner/work/_actions/ArtiomTr/jest-coverage-report-action/v2/dist/index.js:2221:158)
at async UD (/home/runner/work/_actions/ArtiomTr/jest-coverage-report-action/v2/dist/index.js:2230:14806)
Head coverage collection ended
Begin switching to base branch...
cc @ArtiomTr: initially we got this to run by setting the heap to 4096, as shown below, but now it is breaking in the same way again. Any ideas what could be causing this?
name: 'coverage'
on:
  push:
    branches: [main]
  pull_request:
    branches: ['*']
jobs:
  coverage:
    permissions:
      checks: write
      pull-requests: write
      contents: write
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js 16.14.0
        uses: actions/setup-node@v2
        with:
          node-version: 16.14.0
      - uses: ArtiomTr/jest-coverage-report-action@v2
        env:
          NODE_OPTIONS: '--max_old_space_size=4096'
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          package-manager: yarn
          test-script: yarn test
          annotations: none
Hey @MorenoMdz 👋,
I'm sorry, I missed your issue. In the latest version of Jest, memory consumption increased drastically. You can try to apply the fixes described in that issue, or you can use the new Jest feature: shards. Although shards aren't supported natively, some people were able to configure this action to use them; here is a link to a comment
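For anyone following the shard suggestion, here is a rough sketch of what a sharded setup might look like. This is an assumption, not a tested recipe: the matrix wiring and shard count are illustrative, and since the action doesn't support shards natively, merging coverage across shards is left to the approach in the linked comment (requires Jest 28+ for `--shard`):

```yaml
# Hypothetical sketch: split the test run across 4 parallel jobs so each
# Jest process only holds a quarter of the suite in memory.
jobs:
  coverage:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v3
      - uses: ArtiomTr/jest-coverage-report-action@v2
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          package-manager: yarn
          # Jest 28+ runs only the given slice of the test suite.
          test-script: yarn test --shard=${{ matrix.shard }}/4
          annotations: none
```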
Ended up fixing it by increasing the heap size:
env:
  NODE_OPTIONS: '--max_old_space_size=8192'
@ArtiomTr would you consider changing/customising the way your action uses workerIdleMemoryLimit? See https://jestjs.io/docs/configuration/#workeridlememorylimit-numberstring and jestjs/jest#11956.
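For context, a minimal jest.config.js sketch of the workerIdleMemoryLimit option being discussed (Jest 29+). The '1GB' value is an arbitrary example, not a recommendation:

```javascript
// jest.config.js: sketch of the workerIdleMemoryLimit option.
// When a worker's idle heap exceeds this limit, Jest restarts that worker,
// which bounds memory growth over a long test run.
/** @type {import('jest').Config} */
const config = {
  // Accepts bytes (number), a percentage string, or units such as '512MB'.
  workerIdleMemoryLimit: '1GB',
};

module.exports = config;
```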