ToothlessGear/node-gcm

Max registration IDs

leearmstrong opened this issue · 9 comments

Will node-gcm fail if I send it more than 1,000 registration IDs in a single message, since that is all Google supports? Do I have to split them up myself beforehand?

You'll have to split your IDs beforehand, as node-gcm doesn't do that for you automatically yet.

Perfect. Thanks!


If anyone else needs to support more than 1,000 devices, you can easily split the tokens up into batches like this:

// The caolan/async library is used below to send batches in parallel
var async = require('async');

// Max devices per GCM request
var batchLimit = 1000;

// Batches will be added to this array
var tokenBatches = [];

// Traverse tokens and split them into batches of up to 1,000 devices each
for (var start = 0; start < tokens.length; start += batchLimit) {
    // Get the next (up to) 1,000 tokens
    var slicedTokens = tokens.slice(start, start + batchLimit);

    // Add to batches array
    tokenBatches.push(slicedTokens);
}

// You can now send a push to each batch of devices, in parallel
async.each(tokenBatches, function (batch, callback) {
    // Assuming you already set up the sender and message
    sender.send(message, { registrationIds: batch }, function (err, result) {
        // Push failed?
        if (err) {
            // Stop executing other batches
            return callback(err);
        }

        // Done with this batch
        callback();
    });
}, function (err) {
    // Log any error to the console
    if (err) {
        console.log(err);
    }
});
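For reuse, the splitting step can be factored into a small helper function. This is just a sketch (the `chunkTokens` name is ours, not part of node-gcm):

```javascript
// Sketch: split an array of registration tokens into batches of at most
// `batchLimit` items (GCM accepts up to 1,000 registration IDs per request)
function chunkTokens(tokens, batchLimit) {
    var batches = [];
    for (var start = 0; start < tokens.length; start += batchLimit) {
        batches.push(tokens.slice(start, start + batchLimit));
    }
    return batches;
}
```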

I wrote parallel-batch, which does pretty much that: https://www.npmjs.com/package/parallel-batch

Nice package! It would greatly simplify the code I wrote.

@hypesystem maybe it would be a good idea to integrate your parallel-batch library into node-gcm, so that batching will be performed automagically?

I agree, that was the original intention with parallel-batch. As it turns out, though, it's easier said than done.

Specifically, we want to return errors correctly (as if no batching had happened; if only some of the batches fail, some of the messages may still have been sent), and we want to handle retries the way the user expects (which is also hard).
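Reporting partial failure correctly might look something like this sketch (plain promises; `sendBatch` is a hypothetical function wrapping a single sender.send call):

```javascript
// Sketch: send every batch, collecting per-batch errors instead of
// failing fast, so partial success is reported accurately.
// `sendBatch` is a hypothetical function returning a promise for one
// sender.send call.
function sendAllBatches(batches, sendBatch) {
    return Promise.all(batches.map(function (batch, index) {
        return sendBatch(batch)
            .then(function (result) {
                return { index: index, result: result };
            })
            .catch(function (err) {
                // Record the failure, but let the other batches finish
                return { index: index, error: err };
            });
    })).then(function (outcomes) {
        var failed = outcomes.filter(function (o) { return o.error; });
        return { outcomes: outcomes, failed: failed };
    });
}
```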

I think it's doable -- if one of the batches fails, we'll retry it until we run out of tries. We just have to make sure that GCM doesn't deliver the push notification to some devices in the batch while erroring out -- that would cause some serious spamming. But in any case, that could already be happening today with fewer than 1,000 devices in sender.send.
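A per-batch retry loop could be sketched like this (`sendBatch` is again a hypothetical wrapper around a single sender.send call; note this sketch does nothing about the partial-delivery concern above):

```javascript
// Sketch: retry sending one batch until it succeeds or we run out of tries.
// `sendBatch` is a hypothetical function returning a promise for one
// sender.send call; `maxTries` is the total number of attempts allowed.
function sendWithRetry(batch, sendBatch, maxTries) {
    return sendBatch(batch).catch(function (err) {
        if (maxTries <= 1) {
            // Out of tries: surface the last error to the caller
            return Promise.reject(err);
        }
        // Try the same batch again
        return sendWithRetry(batch, sendBatch, maxTries - 1);
    });
}
```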

I would love to see you give it a try. First of all, though, I think we need a new issue to discuss this -- feel free to create it.

I will try to gather my thoughts on what behaviour I think we would want, and why, exactly, it is tricky :-)