steelbrain/node-ssh

Correctly handle unexpected errors

Opened this issue · 9 comments

n1xx1 commented

I'm using OpenSSH on Windows, and it looks like it doesn't behave the way node-ssh/ssh2 expects, so an unhandled exception is thrown.

events.js:292
      throw er; // Unhandled 'error' event
      ^
Error: read ECONNRESET
    at TCP.onStreamRead (internal/stream_base_commons.js:209:20)
Emitted 'error' event on Client instance at:
    at Socket.<anonymous> (/usr/local/lib/node_modules/n8n/node_modules/ssh2/lib/client.js:745:12)
    at Socket.emit (events.js:315:20)
    at emitErrorNT (internal/streams/destroy.js:106:8)
    at emitErrorCloseNT (internal/streams/destroy.js:74:3)
    at processTicksAndRejections (internal/process/task_queues.js:80:21) {
  errno: -104,
  code: 'ECONNRESET',
  syscall: 'read',
  level: 'client-socket'
}

It looks like there isn't a generic "on error" handler outside of the async functions, so Node complains about an unhandled throw. There should be a way to always handle the error event, even outside of a single async function call, so that a faulty OpenSSH implementation like the Windows one doesn't crash the Node process.
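For now, the only caller-side stopgap I can think of is reaching the underlying ssh2 client via ssh.connection and attaching a catch-all listener there. A minimal sketch (host and credentials are placeholders):

import { NodeSSH } from 'node-ssh'

async function main() {
  const ssh = new NodeSSH()

  await ssh.connect({
    host: 'my-windows-host', // placeholder
    username: 'user',        // placeholder
    password: 'secret',      // placeholder
  })

  // Catch-all listener on the underlying ssh2 client: without it, a socket
  // error such as ECONNRESET is emitted as an unhandled 'error' event and
  // takes down the whole process.
  ssh.connection?.on('error', (err) => {
    console.error('ssh client error:', err)
  })

  console.log(await ssh.execCommand('echo hi'))
  ssh.dispose()
}

main().catch(console.error)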

Unrelated, but this usually happens when you run an SSH command while the connection is still being established. You should await the ssh.connect() call before running any SSH code.

I ran into this because I sent the command sudo reboot, which obviously terminates the connection, but it would be nice if I could catch the error instead of having my entire Node process crash. This particular situation (running a reboot command) is relatively easy to work around by using a different approach (or the AWS SDK, in my case), but I still think all errors should be catchable.

That's a good pointer! I'll look into this, thanks!

I was unable to reproduce this. For me it just gives an empty response and then closes the connection.

Here's the code I used:
import fs from 'fs'

import { NodeSSH } from './index'

async function main() {
  const ssh = new NodeSSH()

  await ssh.connect({
    port: 22,
    host: '192.168.10.176',
    username: 'pi',
    password: '[redacted]',
  })

  ssh.connection?.on('error', (err) => {
    console.log('error', err)
  })

  console.log(await ssh.execCommand('echo hi'))
  console.log(await ssh.execCommand('sudo reboot'))
}

main()

process.on('unhandledRejection', (reason, promise) => {
  console.log('dying on unhandledRejection', reason, promise)
  process.exit(1)
})

process.on('uncaughtException', (err, origin) => {
  fs.writeSync(process.stderr.fd, `Caught exception: ${err}\nException origin: ${origin}`)
  process.exit(1)
})
and here's the output:
~/P/s/node-ssh ❯❯❯ node --trace-warnings lib/cjs/test.js
{ code: 0, signal: null, stdout: 'hi', stderr: '' }
{ code: null, signal: null, stdout: '', stderr: '' }

Is anybody still experiencing this?

Yep, I still have this issue. I spun up a project with the dependency versions from the project where I have issues:

https://github.com/mikey-t/node-ssh-reboot-test

I put notes in the README for how to repro. I'm using TypeScript and ts-node with Node 18.18.0. Here's what I found:

If you omit the event handler:

ssh.connection?.on('error', (err) => {
  console.log('error', err)
})

Then try/catch is bypassed and the error can only be caught with:

process.on('uncaughtException', (err, origin) => { /* ... */ })

BUT, it only happens sometimes, maybe 1 out of every 5 to 8 times for me. I'm looking at the code that handles the promise rejection and I'm puzzled as to why this would only happen sometimes. Got any ideas?

I built node-ssh within WSL (Ubuntu) and linked/referenced that version, and was unable to reproduce. Then I built node-ssh on Windows and linked it, and also wasn't able to reproduce. So I unlinked, pulled the live npm version down again, and was able to reproduce after about 5 tries.

I'm tempted to just wire up that ssh.connection.on('error') handler in my other project and walk away, since that seems to prevent catch blocks from being bypassed for some reason.
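Roughly what I mean, in case it's useful to anyone else (host and credentials are placeholders; with the listener attached, the failure, when it does happen, seems to land in the catch block instead of escaping as an uncaughtException):

import { NodeSSH } from 'node-ssh'

async function rebootRemote() {
  const ssh = new NodeSSH()

  await ssh.connect({
    host: '192.168.0.10', // placeholder
    username: 'user',     // placeholder
    password: 'secret',   // placeholder
  })

  // Without this listener the ECONNRESET occasionally escapes as an
  // unhandled 'error' event and kills the process.
  ssh.connection?.on('error', (err) => {
    console.warn('ssh socket error (expected during reboot):', err.message)
  })

  try {
    console.log(await ssh.execCommand('sudo reboot'))
  } catch (err) {
    // With the listener wired up, an error here is catchable again.
    console.warn('reboot command failed:', err)
  } finally {
    ssh.dispose()
  }
}

rebootRemote()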

I have just stumbled over this same issue.

Calling ssh.dispose() always results in:

Error: read ECONNRESET
    at TCP.onStreamRead (node:internal/stream_base_commons:217:20) {
  errno: -104,
  code: 'ECONNRESET',
  syscall: 'read',
  level: 'client-socket'
}

This is with the following, connecting to OpenSSH on Windows 11:

  • Ubuntu 22.04.4 LTS
  • Node v20.12.2
  • node-ssh@13.2.0
  • ssh2@1.15.0

Adding ssh.connection?.on('error', (err) => {}); before the disconnect suppresses the error.

The SSH connection is otherwise working perfectly. This is my minimum test case that reproduces the issue:

import { NodeSSH } from 'node-ssh';
(async () => {
    const ssh = new NodeSSH();
    await ssh.connect({
        host:           SERVER_HOSTNAME,
        username:       SSH_USERNAME,
        privateKeyPath: SSH_PRIVATE_KEY_PATH
    });
    await ssh.dispose();
})();

It doesn't matter whether any execCommand is performed in between. Neither does adding a delay.
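For completeness, the same test case with the suppressing listener added (the empty handler just swallows the client-socket ECONNRESET so the process doesn't crash):

import { NodeSSH } from 'node-ssh';
(async () => {
    const ssh = new NodeSSH();
    await ssh.connect({
        host:           SERVER_HOSTNAME,
        username:       SSH_USERNAME,
        privateKeyPath: SSH_PRIVATE_KEY_PATH
    });
    // Workaround: swallow the spurious error emitted while disconnecting
    // from the Windows OpenSSH server.
    ssh.connection?.on('error', () => {});
    await ssh.dispose();
})();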

Thanks for reporting @thoukydides

The .dispose() method internally calls .end() on the client connection, as can be seen here:

this.connection.end()

As per ssh2's docs (https://github.com/mscdex/ssh2?tab=readme-ov-file#connection-methods), this is a valid method to call, so the connection-reset error here is really odd.

Maybe we should reproduce the above snippet using the underlying ssh2 APIs (https://github.com/mscdex/ssh2) and see if the error persists; if so, we can open an issue upstream.

I can reproduce the same problem with ssh2 directly:

const { readFileSync } = require('fs');
const { Client } = require('ssh2');

const connection = new Client();
connection
    .on('ready', () => connection.end())
    .connect({
        host:       HOSTNAME,
        port:       22,
        username:   USERNAME,
        privateKey: readFileSync(PRIVATE_KEY_PATH)
    });

This appears to be mscdex/ssh2#850 and mscdex/ssh2#1018 (possibly a few others).

Perhaps a note in the README file would be appropriate?
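If such a note gets added, maybe something along these lines, based on my raw-ssh2 reproduction above. I'm assuming the same empty-listener trick that suppressed the error via node-ssh's ssh.connection also works on the raw client, since it's the same 'error' event:

import { readFileSync } from 'fs';
import { Client } from 'ssh2';

const connection = new Client();
connection
    // Swallow the spurious ECONNRESET from the Windows OpenSSH server so it
    // doesn't crash the process as an unhandled 'error' event.
    .on('error', (err) => console.warn('ssh2 client error (ignored):', err.message))
    .on('ready', () => connection.end())
    .connect({
        host:       HOSTNAME,
        port:       22,
        username:   USERNAME,
        privateKey: readFileSync(PRIVATE_KEY_PATH)
    });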