NodeRedis/node-redis-parser

Large array lengths can cause an unhandled RangeError

nkochakian opened this issue · 6 comments

I'm seeing an unhandled exception in cases where the parser is given invalid data that happens to look like an array. Part of the problem is that the parser preallocates an array using whatever length value it parsed (valid or not), as seen here. If that length happens to be a very large positive number, a RangeError is thrown.

You can test this by having the parser read this buffer: Buffer.from("2AFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0D0A", "hex"). Add more bytes and the parsed length will eventually overflow to Infinity.
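To see where the RangeError comes from without involving the parser itself, here is a minimal sketch. The `parseArrayLength` helper is hypothetical; it only mimics the unvalidated digit-accumulation loop the issue points at:

```javascript
// Hypothetical mimic of the parser's length-parsing hot loop: no
// validation, just `length * 10 + digit` accumulation per byte.
function parseArrayLength(buf) {
  let length = 0;
  // Skip the leading '*' (0x2a) and stop at '\r' (0x0d).
  for (let i = 1; i < buf.length && buf[i] !== 0x0d; i++) {
    length = length * 10 + (buf[i] - 48);
  }
  return length;
}

// The malformed buffer from above: '*' followed by 0xFF bytes and CRLF.
// Each 0xFF byte contributes (0xFF - 48) = 207, so length explodes.
const malformed = Buffer.from('2AFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0D0A', 'hex');
const length = parseArrayLength(malformed); // roughly 2.3e20, far beyond 2^32 - 1

try {
  // Preallocating with an out-of-range length is what blows up:
  new Array(length);
} catch (err) {
  console.log(err.name); // "RangeError"
}
```

Any value that is not an integer in the range 0..2^32 - 1 makes the `Array` constructor throw a RangeError ("Invalid array length"), which is why attacker-controlled lengths surface as an exception rather than a parse error.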

Something should be done to at least handle the potential exception and treat it as a parsing error.

@nkochakian are you using the parser directly in some way? Redis itself will only ever return valid array lengths. As for RESP, the parser only reaches that function when the stream declares that the next data is an array of that specific length, e.g. *99999999999999\r\n[actual super long data]\r\n. But such a length is simply out of range for JS.

Yes, it is currently being used by a server that uses RESP as its communication protocol by passing bytes from TCP sockets directly to the parser. I understand that this wasn't the intended use for the parser, and a Redis server shouldn't intentionally return invalid data, but I was expecting the parser to handle malformed input a little more gracefully.

This parser is built for plain speed and expects the passed in data to be valid. However, there is a simple approach that you can use to circumvent that problem.

Just wrap your parser call in a try / catch. In case of an error, handle it on your side and reset the parser as described in the README for fatal exceptions. That should be just what you want.
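That pattern could look roughly like this. This is a sketch, not redis-parser's actual code: `FakeParser` is a stub that only reproduces the failure mode, and its `reset()` stands in for the parser-reset function the README documents:

```javascript
// Stub standing in for the real parser; it only reproduces the failure
// mode: a huge declared array length makes preallocation throw.
class FakeParser {
  reset() { this.pending = null; }          // drop any buffered partial state
  execute(buf) {
    if (buf[0] === 0x2a) {                  // '*' marks a RESP array
      const length = Number(buf.slice(1, buf.indexOf('\r')).toString());
      new Array(length);                    // RangeError for out-of-range lengths
    }
  }
}

// The suggested workaround: wrap the call; on any thrown error, treat it
// as a protocol error and reset the parser before feeding the next chunk.
function safeExecute(parser, chunk) {
  try {
    parser.execute(chunk);
    return true;
  } catch (err) {
    parser.reset();                         // otherwise the next valid chunk breaks too
    return false;
  }
}

const parser = new FakeParser();
console.log(safeExecute(parser, Buffer.from('*2\r\n')));                    // true
console.log(safeExecute(parser, Buffer.from('*99999999999999999999\r\n'))); // false
```

With the real parser you would also tear down or resynchronize the offending connection after the reset, since any buffered partial reply is discarded.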

One problem with handling the error differently is that the parser would not know what to do. It could theoretically validate the input and emit an error on malformed data, but that would a) increase the complexity, b) reduce the performance, and c) offer no benefit over just wrapping the call in a try / catch yourself.

Alright, thanks for the suggestion. An external try/catch block should be good enough for cases like this. I had assumed the parser would automatically reset its state after emitting an error?

In case there is an error, the parser has to be reset (via the documented function). Yes, that could be done automatically, but handling this explicitly is better because then it is clear that it is done the right way. Without resetting the parser, the next valid chunk will be broken as well.