Compression support with "deflate-frame" protocol extension
nilya opened this issue · 12 comments
Hello.
WebSockets have a protocol extension called "deflate-frame" for per-frame data compression [1].
WebKit and Chrome already support it (see [2]); Chrome sends the request header `Sec-WebSocket-Extensions: x-webkit-deflate-frame`.
Do you have plans to implement compression in WebSockets?
[1] http://tools.ietf.org/id/draft-tyoshino-hybi-websocket-perframe-deflate-05.txt
[2] http://www.ietf.org/mail-archive/web/hybi/current/msg09463.html
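For reference, the negotiation is an ordinary `Sec-WebSocket-Extensions` exchange: the client offers the extension in its handshake and the server echoes it back to accept it. The key/accept values below are just the RFC 6455 sample values, shown only to illustrate the exchange:

```
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: x-webkit-deflate-frame

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
Sec-WebSocket-Extensions: x-webkit-deflate-frame
```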
I hadn't planned to implement this. Not that I'm against it, but it's far from the top of my to-do list. Would you like to have a go at implementing it?
I'm not going to implement this; I don't have the time or the inclination, and this extension has been superseded by `permessage-deflate` in any case.
Do you plan to support `permessage-deflate` eventually?
There's a branch where I began working on it, but it looked like quite a lot of work and I'm not sure about the design. It's not high up my to-do list.
Part of the problem is that I don't know much about DEFLATE, and I couldn't get Node's `zlib` module to produce the same output as the examples in the spec.
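For anyone who tries: the spec describes each compressed message as raw DEFLATE output terminated by an empty uncompressed block, with the trailing `00 00 ff ff` removed. A minimal, untested sketch of that in Node terms (assuming a raw-deflate stream and a sync flush; the constant is `zlib.constants.Z_SYNC_FLUSH` on newer Node, `zlib.Z_SYNC_FLUSH` on older versions):

```js
var zlib = require('zlib');

var deflate = zlib.createDeflateRaw({ windowBits: 15 }),
    chunks  = [];

deflate.on('data', function(chunk) { chunks.push(chunk) });

deflate.write(Buffer.from('Hello', 'utf8'));
deflate.flush(zlib.constants.Z_SYNC_FLUSH, function() {
  var payload = Buffer.concat(chunks);
  // Strip the 4-byte flush marker (00 00 ff ff); what remains should match
  // the spec's "Hello" example: f2 48 cd c9 c9 07 00
  console.log(payload.slice(0, payload.length - 4).toString('hex'));
});
```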
I'll have a stab at it over the weekend.
@zzattack If you do look into this, you should be working on https://github.com/faye/websocket-driver-node, not this repo. I'd be interested in whether you think it's easier to implement on `master` or the `stream-parser` branch.
If possible, I don't think this should even be in `websocket-driver` at all: it's unsustainable to keep adding extensions to that repo that I have to maintain, and I don't know what will be added in future. In keeping with the modular design of `driver`, I'd like to see whether this could be implemented by adding an extension API to the `driver` interface so that extensions can be added by third parties. Of course, we'd probably need to build one extension to understand what that API should be; so you could work on getting the logic right (use Autobahn to test your implementation) and then we could figure out how to extract it.
This design is even more important in the Ruby version, where `websocket-driver` supports multiple servers, so making it extensible lets people implement an extension once and solve the problem across many different servers.
From my initial look at the spec, it seems like an extension API would need to be capable of the following (a rough sketch of what an extension might look like follows the list):
- Add extensions to a registry held by an individual driver instance
- On handshake, select applicable extensions to activate, based on the `Sec-WebSocket-Extensions` header and the precedence rules for extensions, e.g. no two extensions should conflict on their use of the RSV bits
- Activate the selected extensions, passing in any parameters specified by the client
- Generate a `Sec-WebSocket-Extensions` reply header based on the activated extensions and the parameters they have negotiated
- Allow extensions to refuse to activate due to incorrect or unacceptable parameters
- Notify the user of errors if an acceptable set of extensions cannot be activated, both on the server and on the client
- Notify selected extensions of the beginning of a new frame or message so the extension can reset its parser
- Pass complete or streamed frames through extensions for transformation, as indicated by the frame's RSV bits
- Pass complete or streamed messages through extensions for transformation, as indicated by the message's initial frame's RSV bits
- Probably a lot more things I've not thought of
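To make that concrete, here is a very rough sketch of the shape a registered extension could take. None of these names exist in `websocket-driver`; they are purely illustrative of the responsibilities listed above:

```js
// Hypothetical shape of an extension object (illustrative only)
var permessageDeflate = {
  name: 'permessage-deflate',
  rsv1: true, rsv2: false, rsv3: false,   // RSV bits this extension claims

  // Given the params offered via Sec-WebSocket-Extensions, return the params
  // to send back in the reply header, or false to refuse activation
  activate: function(offeredParams) {
    return { server_no_context_takeover: true };
  },

  // zlib's API is async, so message transforms take callbacks
  processIncomingMessage: function(message, callback) {
    callback(null, message);   // inflate message.data here
  },
  processOutgoingMessage: function(message, callback) {
    callback(null, message);   // deflate message.data here
  }
};

// A driver instance would then hold a registry of these, e.g.
//   driver.addExtension(permessageDeflate)   // hypothetical method
```

The negotiation rules (RSV conflicts, refusal, error reporting) would live in the driver's registry rather than in each extension.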
You should consider the design philosophy behind `websocket-driver` as explained on my blog; it uses tell-don't-ask style OOP so that the driver can make sure that whatever it is connected to behaves correctly. It should be possible for users to deploy it correctly without knowing any of the details of how the protocol works; this also helps it adapt to new versions of the protocol.
If you have any questions about the existing architecture, please ask.
I'm also looking into implementing `permessage-deflate` sometime soonish. Is your opinion still the same that the right way to do this is via a separate extension module?
One thing that's notable about it is that since the compress/decompress step is async, there will need to be an extra buffering queue in both the inbound and outbound directions to ensure that messages are emitted/written in the right order. Perhaps this is easier on the stream-parser branch; e.g. you could just asynchronously not call the next `_parseOpcode` until after the inflation is done. Is the stream-parser branch stable enough that it's worth developing on top of?
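To make the buffering point concrete, here is a minimal sketch (not code from either branch) of the kind of queue I mean: transforms may complete out of order, but results are delivered strictly in the order they were enqueued.

```js
// Hypothetical in-order delivery queue: push async transforms, get results
// emitted in push order regardless of completion order.
function OrderedQueue(emit) {
  this._emit  = emit;
  this._cells = [];
}

OrderedQueue.prototype.push = function(transform) {
  var cell = { done: false, result: null },
      self = this;

  this._cells.push(cell);

  transform(function(error, result) {
    cell.done   = true;
    cell.result = result;
    // flush every leading cell that has already completed
    while (self._cells.length > 0 && self._cells[0].done) {
      self._emit(self._cells.shift().result);
    }
  });
};

// Usage: "first" is logged before "second" even though it finishes later
var queue = new OrderedQueue(function(msg) { console.log(msg) });
queue.push(function(cb) { setTimeout(function() { cb(null, 'first')  }, 20) });
queue.push(function(cb) { setTimeout(function() { cb(null, 'second') }, 5)  });
```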
@glasser It doesn't necessarily need to be a separate npm module, but we should architect things so that nothing about permessage-deflate per se is encoded in the core driver. This will help to drive out an API design for extensions in general.
That's a good point about the async compression API for zlib. The biggest stumbling block for me when I tried to implement this was that I couldn't figure out how to reproduce the examples in the spec using the Node `zlib` API (I'm not very familiar with how zlib works). I don't know which branch it would be easier to implement this on; my advice would be to try both and see what you find out.
Great. I guess part of my question is: is the stream-parser branch itself finished enough that basing work on it won't introduce other issues?
@glasser I'm not sure I'd call it 'finished'; I originally made it entirely to support doing extensions, but then I didn't get round to doing permessage-deflate. My hunch is you're correct about the async argument, but you would need to rebase the branch on top of master since a lot has changed since I forked it.