marko-js/fastify

fastify streaming waiting till done (<await()> not firing)

J-Hoh opened this issue · 8 comments

J-Hoh commented

Version: 1.0.2

Details

The fastify server currently waits until the entire stream is done before it starts sending anything to the client. This causes any <await(promise) client-reorder=true> tag to be shipped already fulfilled, negating any benefit of streaming.

Expected Behavior

As with the express plugin, streaming should start as soon as possible. Any <await> tags with reordering should also be sent out immediately, with the stream staying open until the promises resolve server-side.

Actual Behavior

The server stalls until the stream is completed and only then starts sending it to the client.

Possible Fix / Steps to Reproduce

Repro Repo
^ this repo also includes the fix to try out

It would be good to know why fastify isn't working with what we already have, since their docs say you can reply with a stream.

Either way, it seems that if we want to use the raw response we must also call reply.hijack() (https://www.fastify.io/docs/latest/Reference/Reply/#hijack).
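
Something like this is what I had in mind (a rough sketch, assuming a compiled Marko template imported as Template; reply.hijack() tells fastify we take over the response ourselves):

app.get("/", (request, reply) => {
  // take over the response so fastify no longer manages (or buffers) it
  reply.hijack();
  // pipe the Marko render stream straight into the raw Node response
  Template.stream({ name: "Marko" }).pipe(reply.raw);
});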

J-Hoh commented

Short update:
Looking into what fastify does when you pass a stream to reply.send() revealed that it's pretty much the same as a direct pipe into the raw stream, EXCEPT that it first sets headers on the raw response.

It seems my "solution" only works when sending absolutely no headers (the fastify reply.type() does not get sent before the raw pipe!).
EDIT: Sending a "Content-Type" header with the stream makes it wait until the stream is done. Leaving it out poses another problem though... my main project suddenly gets recognized as plain text, so no HTML rendering...
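
One thing I still want to try is writing the header on the raw response myself before piping, so the browser still sees HTML (rough sketch, not verified; writeHead is plain Node http API):

reply.hijack();
// set the content type directly on the raw response instead of via reply.type()
reply.raw.writeHead(200, { "content-type": "text/html;charset=utf-8" });
Template.stream(data).pipe(reply.raw);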

J-Hoh commented

Had another look and it seems like compression is the culprit...
The fastify-compress plugin does not handle HTML streams properly (who expects them to be used the way Marko does?). This could be (temporarily) fixed by not compressing Marko stream outputs via setting a header:
this.request.headers["x-no-compression"] = "true";
The header is checked by the plugin and, if present, no compression will happen.
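
For example, it could be set in a hook so it applies before fastify-compress runs (a sketch; this disables compression for every request, in practice you would probably scope it to the Marko routes):

fastify.addHook("onRequest", (request, reply, done) => {
  // fastify-compress skips compression when this request header is present
  request.headers["x-no-compression"] = "true";
  done();
});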

This alone doesn't completely eliminate the problem though, as in development mode the webpack-dev-server somehow also compresses the passed-through stream (gzip), thus staggering/buffering the response.
The direct stream.pipe(reply.raw) somehow circumvents this compression; reply.send(stream) does not.
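
In case it helps others hitting the dev-server part: webpack-dev-server's own compression can simply be switched off in the config (a sketch, assuming webpack-dev-server's compress option):

// webpack.config.js
module.exports = {
  devServer: {
    // don't gzip dev responses, so the streamed Marko output isn't buffered by the dev server
    compress: false,
  },
};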

Good find. I think the main issue is that fastify-compress does not expose an API equivalent to https://github.com/expressjs/compression#resflush.

We may want to make a PR there and/or figure out a way to achieve the same.

I need to test this, but we may be able to do something like:

stream.once("pipe", to => {
  // if the stream we get piped together with exposes a flush() (e.g. a zlib
  // compression stream), forward it so flushing can propagate through
  if (to.flush) stream.flush = to.flush.bind(to);
});

J-Hoh commented

Will test around with that as well, looks promising.

Another workaround would be to simply change how fastify-compress compresses streams, as it does expose these options!

const { constants } = require("zlib"); // zlib flush constants
const fastifyCompress = require("fastify-compress");

fastify.register(fastifyCompress, {
	zlibOptions: {
		// flush: constants.Z_PARTIAL_FLUSH, // EDIT: Deprecated, use Z_SYNC_FLUSH instead
		flush: constants.Z_SYNC_FLUSH,
	},
	brotliOptions: {
		flush: constants.BROTLI_OPERATION_FLUSH,
	},
})

These are literally the options passed to the zlib stream compression functions (createBrotliCompress/createGzip/createDeflate), which by default compress the entire stream only once it has ended, as that gives the "best" compression.
This might be a desired default, but your find definitely makes it easier to use.

So it turns out that because of the way they wrap the stream, the flush API from the compression stream is not exposed (https://nodejs.org/dist/latest-v16.x/docs/api/zlib.html#flushing).
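
For anyone following along, this is the zlib-level flushing being referenced: a standalone sketch (nothing fastify-specific) of what calling flush() on a compression stream does:

const zlib = require("zlib");

const gzip = zlib.createGzip();
gzip.pipe(process.stdout); // stand-in for the HTTP response stream

gzip.write("<div>first chunk of HTML</div>");
// without an explicit flush, zlib buffers output until the stream ends;
// Z_SYNC_FLUSH pushes the compressed bytes written so far downstream right away
gzip.flush(zlib.constants.Z_SYNC_FLUSH);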

It would be great to figure out how to possibly enable this in fastify-compress and maybe we should make an issue over there.

Your configuration above is a good alternative; we could also just document that as a solution. I wonder if there is a way we could automatically configure these options whenever a Marko template is rendered? Also, it seems like we should use Z_SYNC_FLUSH instead of Z_PARTIAL_FLUSH.

@J-Hoh I just updated the docs to highlight this issue. Thanks for bringing it up!

I can't find a way to automate this with the way fastify-compress currently works, and I'm not quite sure what the best way to tweak that module would be either.

I do think simply changing the config for that module is a good enough workaround though, and I'll make sure our examples are updated accordingly!
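
For reference, here's a minimal end-to-end sketch of the documented workaround (assuming a compiled Marko template imported as Template via the Marko require hook or a build step, and that the flush options above are what stops the buffering):

const fastify = require("fastify");
const fastifyCompress = require("fastify-compress");
const { constants } = require("zlib");
const Template = require("./page.marko"); // hypothetical template path

const app = fastify();

// flush compressed chunks as they are produced instead of buffering the whole stream
app.register(fastifyCompress, {
  zlibOptions: { flush: constants.Z_SYNC_FLUSH },
  brotliOptions: { flush: constants.BROTLI_OPERATION_FLUSH },
});

app.get("/", (request, reply) => {
  reply.type("text/html");
  // Template.stream() returns a readable that emits HTML as the <await> promises resolve
  reply.send(Template.stream({ name: "Marko" }));
});

app.listen(3000);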