sindresorhus/public-ip

cluster error: [Error: Request timed out]


When starting a fleet of instances with pm2 (e.g. -i 4), with public-ip running inside each instance on launch, only the first one succeeds. All the others report this error:

[Error: Request timed out]

Example script:

require('public-ip').v4(function (err, ip) {
    if (err) return console.error('no public IP found', err);
    console.log('ip', ip);
});

pm2 start publicIp.js -i 4

Works for me with plain node:

❯ node publicip.js & node publicip.js & node publicip.js & node publicip.js  
[1] 48337
[2] 48338
[3] 48339
ip 49.237.136.96
ip 49.237.136.96
ip 49.237.136.96
ip 49.237.136.96
[1]    done       node publicip.js
[3]  + done       node publicip.js
[2]  + done       node publicip.js

I don't have time to look into this further unless you can provide a reproducible test case in plain node.

OK, I'll look into it.

Works for me with plain node too.
Closing this.

So I found out that it's Node's cluster module that conflicts with public-ip: getting the IP inside workers doesn't work after the first one (or sometimes the first two). Something's definitely not right.
Try running:

node publicIpCluster.js 4

publicIpCluster.js

const publicIp = require('public-ip');
const cluster = require('cluster');
const numInstances = process.argv[2] || require('os').cpus().length;

if (cluster.isMaster) {
  for (let i = 0; i < numInstances; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  // Workers: each one looks up the public IP on startup
  publicIp.v4(function (err, ip) {
    if (err) return console.error('no public IP found', err);
    console.log('ip: ', ip);
  });
}

Yields:

ip:  198.8.80.79
no public IP found [Error: Request timed out]
no public IP found [Error: Request timed out]
no public IP found [Error: Request timed out]

(tested on my MacBook and on servers too)

I don't really have any experience with clusters. If I had to guess, it's something with the native-dns module that this module depends on.

Maybe @silverwind could shed some light on this.

You might want to use ipify instead for now.
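Something like this should do as a stopgap (rough sketch, assuming the plain-text api.ipify.org endpoint):

const https = require('https');

// Ask the ipify HTTP API for the public IP instead of going through DNS.
https.get('https://api.ipify.org', function (res) {
  let body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () { console.log('ip:', body); });
}).on('error', function (err) {
  console.error('no public IP found', err);
});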

native-dns is unmaintained and broken in many ways; I suggest we create the DNS packet manually.
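Roughly what I mean (untested sketch using the newer Buffer.alloc/Buffer.from APIs, which would need to be new Buffer on 0.12; the answer parsing is deliberately naive and just grabs the last four bytes, which works when there's a single A record):

const dgram = require('dgram');

// Build a minimal DNS query for an A record by hand.
function buildQuery(hostname) {
  const header = Buffer.alloc(12);
  header.writeUInt16BE(0x1234, 0);   // transaction ID (arbitrary)
  header.writeUInt16BE(0x0100, 2);   // flags: recursion desired
  header.writeUInt16BE(1, 4);        // QDCOUNT = 1 question
  const labels = hostname.split('.').map(function (label) {
    return Buffer.concat([Buffer.from([label.length]), Buffer.from(label)]);
  });
  return Buffer.concat([header].concat(labels).concat([
    Buffer.from([0]),      // end of the name
    Buffer.from([0, 1]),   // QTYPE = A
    Buffer.from([0, 1])    // QCLASS = IN
  ]));
}

const socket = dgram.createSocket('udp4');
const query = buildQuery('myip.opendns.com');

socket.on('message', function (msg) {
  // Naive parse: with a single A answer, the IPv4 address is the last 4 bytes.
  const b = msg.slice(msg.length - 4);
  console.log('ip:', b[0] + '.' + b[1] + '.' + b[2] + '.' + b[3]);
  socket.close();
});

// OpenDNS resolver; myip.opendns.com resolves to the caller's own public IP.
socket.send(query, 0, query.length, 53, '208.67.222.222');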

Or, use setServers and require 0.12+.
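Something along these lines (sketch only, keeping the OpenDNS myip.opendns.com trick the module already relies on):

const dns = require('dns');

// Point the built-in resolver at OpenDNS; myip.opendns.com then resolves
// to the caller's own public IP.
dns.setServers(['208.67.222.222', '208.67.220.220']);

dns.resolve4('myip.opendns.com', function (err, addresses) {
  if (err) return console.error('no public IP found', err);
  console.log('ip:', addresses[0]);
});

The catch is that setServers is global to the process, so you'd probably want to save dns.getServers() first and restore it after the lookup.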

> use setServers and require 0.12+.

I'm fine with requiring 0.12. Any hints on how I would use it in this case? I've never touched the dns module.

I think 0.12+ is fine too.

You could create a new branch that leaves the repo in its current state, so people on older Node can still use this version (though without cluster support).