olsonpm/sqlite-to-rest

Stream response

Closed this issue · 4 comments

Hi there,

Do you happen to have an example of how to make and handle a request that results in a "large" response? Above a certain volume, when the server starts attempting to stream the result, I only receive the first chunk of the response data. I am using axios to communicate with the REST server.

Cheers,
Nick

I don't remember whether I wrote a test for this case. If I didn't, you may have discovered a bug. I'll look into it when I get a moment. Real life is catching up like it tends to do.

Just an update: I'm trying to get a test going that will fail or pass depending on whether this is a bug. I've made progress but I'm not done yet. I'll update again when I have more time to dig.

Great, thanks.

I'm currently working around this by fetching results in chunks, using the Range header, until I get a "Range Not Satisfiable" error. If I knew ahead of time how many rows there were to fetch, it would be cleaner.

If there were some way to get the count() for a given query, that would be a pretty useful feature. The only way I could do that currently is to write a count view for every table and view I need a count for. The above approach is much less work :P
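For reference, here's roughly what my workaround looks like. This is just a sketch: `fetchAllRows` and `rowRange` are my own names, the range unit string is what I'm guessing sqlite-to-rest expects (check the README for the exact format), and I pass the HTTP client in (e.g. `axios.get`) rather than hard-coding it:

```javascript
// Build a Range header value for rows [offset, offset + size - 1].
function rowRange(offset, size) {
  return `rows=${offset}-${offset + size - 1}`;
}

// `get` is any axios-style function: get(url, config) -> Promise<{ data }>
// that rejects with err.response.status === 416 past the last row.
async function fetchAllRows(get, url, chunkSize = 100) {
  const rows = [];
  for (let offset = 0; ; offset += chunkSize) {
    try {
      const res = await get(url, {
        headers: { Range: rowRange(offset, chunkSize) },
      });
      rows.push(...res.data);
      // A short chunk means we just read the last rows.
      if (res.data.length < chunkSize) break;
    } catch (err) {
      // 416 Range Not Satisfiable -> we've read past the end.
      if (err.response && err.response.status === 416) break;
      throw err;
    }
  }
  return rows;
}

// usage: fetchAllRows(axios.get, 'http://localhost:8000/tables/mytable')
```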

Cheers

Well, I finally set aside some time to figure this out. My knowledge of streams is poor, but I learned a lot. Mostly I had a fundamental misunderstanding of how streaming HTTP responses work, so I apologize for that glaring bug.

Anyway, you should now be able to stream responses. Chunks come pretty much a row at a time. I didn't spend any time optimizing it because node-sqlite3 doesn't have a streamable interface built in, so there aren't any great solutions. If performance becomes an issue, someone can tackle it in a future PR.

To explain the bug: I had logic to calculate the Content-Length header by piping the stream and adding each chunk's byte length to a running contentLength variable. However, when you stream a response you're telling the client "hey, we don't know how large this thing is going to be, so I'm going to send you this data in chunks." So I wasn't actually streaming the response: I was waiting until all rows had been iterated before sending the Content-Length header, which never gave koa a chance to stream.

What still doesn't make sense to me is how a server is supposed to communicate that an error occurred while generating future chunks. From what I can tell, the protocol doesn't account for this case, so you'd have to communicate it on your own via the chunks and API documentation, e.g. "each chunk will contain a JSON object { "data": "my chunk data" } unless an error occurs, in which case we'll send a chunk { "error": "reason" }". That still seems dirty, because I'd consider errors while generating chunks to be part of the request, so a 200 success status is inappropriate.
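The client side of that convention might look like this. To be clear, this is hypothetical: `consumeChunks` is a made-up name and sqlite-to-rest doesn't actually emit `{ data }` / `{ error }` chunks; it just shows why the client is stuck inspecting every chunk itself once the 200 status has already gone out:

```javascript
// Each chunk is a JSON string: either { "data": ... } or
// { "error": "reason" }. Since the server already sent a 200
// status before the error happened, the client has to detect
// mid-stream failures at the application level.
function consumeChunks(chunks) {
  const rows = [];
  for (const chunk of chunks) {
    const parsed = JSON.parse(chunk);
    if ('error' in parsed) {
      // Too late for a 4xx/5xx status -- surface it here instead.
      throw new Error(`stream failed mid-response: ${parsed.error}`);
    }
    rows.push(parsed.data);
  }
  return rows;
}
```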

Whatever though, I think my update is good'nuff heh.