Reported Memory Leak
Describe the bug
I am getting this error with unlighthouse/0.13.2 darwin-arm64 node-v20.16.0:
✔ Completed inspectHtmlTask for /webinar-vbid-model-cy-2025-application-process-office-hours. (Time Taken: 9.2s 196.08 KB 51% complete) Unlighthouse 2:26:06 p.m.
✔ Completed inspectHtmlTask for /sf5510-authorization-agreement-preauthorized-payments. (Time Taken: 23.1s 214.02 KB 52% complete) Unlighthouse 2:26:08 p.m.
(node:7707) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 unpipe listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListene
(Use `node --trace-warnings ...` to show where the warning was created)
(node:7707) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListener
(node:7707) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListener
(node:7707) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 finish listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListene
(node:7707) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 unpipe listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListene
(node:7707) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListene
(node:7707) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListener
(node:7707) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 finish listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListene
/Users/mgifford/.npm/_npx/b17dc4389256a785/node_modules/puppeteer-cluster/dist/util.js:69
throw new Error(`Timeout hit: ${millis}`);
^
Error: Timeout hit: 300000
at /Users/mgifford/.npm/_npx/b17dc4389256a785/node_modules/puppeteer-cluster/dist/util.js:69:23
at Generator.next (<anonymous>)
at fulfilled (/Users/mgifford/.npm/_npx/b17dc4389256a785/node_modules/puppeteer-cluster/dist/util.js:5:58)
at runNextTicks (node:internal/process/task_queues:60:5)
at listOnTimeout (node:internal/timers:545:9)
at process.processTimers (node:internal/timers:519:7)
Node.js v20.16.0
I'm not sure why this is happening. Sites scan normally, then memory consumption on the device skyrockets until the script crashes.
I don't know whether this is a Puppeteer problem or an Unlighthouse problem; I haven't customized either, just supplied a config.
For comparison, on a different site where the scan doesn't die, the same warnings still appear:
✔ Completed inspectHtmlTask for /publications/. (Time Taken: 18.9s 174.3 KB 50% complete) Unlighthouse 2:32:19 p.m.
✔ Completed inspectHtmlTask for /od/. (Time Taken: 22.9s 78.14 KB 53% complete) Unlighthouse 2:32:19 p.m.
(node:15947) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 unpipe listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListen
(Use `node --trace-warnings ...` to show where the warning was created)
(node:15947) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListene
(node:15947) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListene
(node:15947) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 finish listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListen
(node:15947) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 unpipe listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListen
(node:15947) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListene
(node:15947) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListenr
(node:15947) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 finish listeners added to [WriteStream]. MaxListeners is 10. Use emitter.setMaxListen
✔ Completed runLighthouseTask for /oirm/. (Time Taken: 28.3s Score: 0.76 Samples: 1 56% complete) Unlighthouse 2:32:46 p.m.
✔ Completed runLighthouseTask for /staff/. (Time Taken: 29.2s Score: 0.87 Samples: 1 61% complete) Unlighthouse 2:32:47 p.m.
Reproduction
Yes...
System / Nuxt Info
System:
OS: macOS 14.5
CPU: (10) arm64 Apple M1 Max
Memory: 169.72 MB / 32.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.16.0 - ~/.nvm/versions/node/v20.16.0/bin/node
Yarn: 1.22.22 - /opt/homebrew/bin/yarn
npm: 10.8.1 - ~/.nvm/versions/node/v20.16.0/bin/npm
pnpm: 9.5.0 - /opt/homebrew/bin/pnpm
Browsers:
Brave Browser: 118.1.59.117
Chrome: 127.0.6533.120
Chrome Canary: 129.0.6666.1
Safari: 17.5
npx unlighthouse-ci --site https://www.whitehouse.gov --config unlighthouse.config-wh.ts
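(The config file referenced above isn't included in the report. For orientation only, a minimal Unlighthouse config of this shape might look like the sketch below; the values are hypothetical and the keys are taken from the Unlighthouse config documentation, not from this issue.)

```ts
// unlighthouse.config-wh.ts — hypothetical sketch, not the reporter's actual file
export default {
  site: 'https://www.whitehouse.gov',
  scanner: {
    samples: 1,        // Lighthouse runs per route
    device: 'desktop',
    // maxRoutes: 200, // capping crawled routes is one way to bound memory on very large sites
  },
}
```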
I have been working on this for a little while now, and while other jobs complete, this one consistently runs into the memory issue. We're running a GitHub Action on ubuntu-latest with Node v22. It always fails at the same point and reports out of memory. I added some tracing and am attaching the logs from the most recent run. The workflow snippet below is the same for all three jobs; it works for two of them but not for the one mentioned above.
logs_28210731332.zip
steps:
  - uses: actions/checkout@v4
    env:
      NODE_OPTIONS: --max_old_space_size=8192
  - run: npm init -y
  - run: npm install
  - name: Use Node.js ${{ matrix.node-version }}
    uses: actions/setup-node@v4
    with:
      node-version: ${{ matrix.node-version }}
      cache: 'npm'
  - run: npm ci
  - run: npm install -g @unlighthouse/cli puppeteer
  - run: strace npx unlighthouse-ci
Still happening for me as well:
/home/user/.cache/pnpm/dlx/nyebsawq2skhwbmv3wuy7tb4uu/1925b7eae8d-33c875/node_modules/.pnpm/puppeteer-cluster@0.24.0_puppeteer@23.5.0/node_modules/puppeteer-cluster/dist/util.js:69
throw new Error(`Timeout hit: ${millis}`);
^
Error: Timeout hit: 300000
at /home/user/.cache/pnpm/dlx/nyebsawq2skhwbmv3wuy7tb4uu/1925b7eae8d-33c875/node_modules/.pnpm/puppeteer-cluster@0.24.0_puppeteer@23.5.0/node_modules/puppeteer-cluster/dist/util.js:69:23
at Generator.next (<anonymous>)
at fulfilled (/home/user/.cache/pnpm/dlx/nyebsawq2skhwbmv3wuy7tb4uu/1925b7eae8d-33c875/node_modules/.pnpm/puppeteer-cluster@0.24.0_puppeteer@23.5.0/node_modules/puppeteer-cluster/dist/util.js:5:58)
at runNextTicks (node:internal/process/task_queues:60:5)
at process.processTimers (node:internal/timers:511:9)
Node.js v20.12.2
Hi, was there ever a resolution to this issue? It is happening repeatedly with one of our sites.
Node.js: v20.18.0
unlighthouse-cli: v0.14.0
Seems to get to 99% and then throws the following error:
<path>/node_modules/puppeteer-cluster/dist/util.js:69
throw new Error(`Timeout hit: ${millis}`);
^
Error: Timeout hit: 300000
at <path>/node_modules/puppeteer-cluster/dist/util.js:69:23
at Generator.next (<anonymous>)
at fulfilled (<path>/node_modules/puppeteer-cluster/dist/util.js:5:58)
Node.js v20.18.0
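(For anyone hitting the same thing: "Timeout hit: 300000" is puppeteer-cluster's per-task timeout firing after five minutes. Unlighthouse documents a puppeteerClusterOptions pass-through to Cluster.launch(); assuming that applies to the versions above, which is an assumption and not something confirmed in this thread, lowering concurrency and raising the timeout would look roughly like the sketch below. It only gives each task more headroom; whether it avoids the underlying memory growth is a separate question.)

```ts
// Sketch only — assumes puppeteerClusterOptions is forwarded to puppeteer-cluster's Cluster.launch()
export default {
  site: 'https://example.com',
  puppeteerClusterOptions: {
    maxConcurrency: 2,        // fewer parallel pages, lower peak memory
    timeout: 10 * 60 * 1000,  // per-task timeout in ms; the value seen in this thread is 300000 (5 minutes)
  },
}
```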