How to read chunks from Transfer-Encoding: chunked
grandchamp opened this issue · 8 comments
Hi.
I'm trying to port a NodeJS app to .NET Core, but I'm hitting a problem reading chunks from HttpContext.Request.Body.
On Node, the function
request.on('data', function(data){
});
receives the data in chunks.
However, I can't make
var routeBuilder = new RouteBuilder(app);
routeBuilder.MapRoute("{secret}/{width:int}/{height:int}", async context =>
{
});
work, because if I try to read context.Request.Body, the read only completes when the client finishes sending the request. How can I read the chunks asynchronously in .NET?
Can you show your code that actually reads the body? And what's the problem with waiting for the whole body? Is it large?
HTTP chunking is a transport detail used to avoid calculating the total length; it's not surfaced to the application. Normally you call HttpContext.Request.Body.ReadAsync and it completes when it has filled the buffer you've given it, or when you reach the end of the body. The servers have some leeway here, but they should always block until they can return some data, unless the body is finished.
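For illustration, a read loop along these lines processes data as it arrives rather than waiting for the whole body. This is a sketch, not code from the issue; `PumpAsync` and the `onData` callback are hypothetical names standing in for whatever consumes the bytes (e.g. the WebSocket send):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

static class BodyReader
{
    // Sketch: pump any Stream (e.g. HttpContext.Request.Body) to a callback.
    public static async Task PumpAsync(Stream body, Func<ArraySegment<byte>, Task> onData)
    {
        var buffer = new byte[4096];
        int bytesRead;
        // ReadAsync completes as soon as *some* data is available, so each
        // iteration can hand off a partial chunk without waiting for the
        // end of the body.
        while ((bytesRead = await body.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            // bytesRead can be anything from 1 to buffer.Length; the chunk
            // boundaries the client used are not guaranteed to survive.
            await onData(new ArraySegment<byte>(buffer, 0, bytesRead));
        }
    }
}
```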
Sure!
I'm basing my code on https://github.com/tahaipek/Nodcam/blob/master/stream.js, translating the Node code to C#.
So far, this code gets video from a webcam using FFmpeg (FFmpeg calls a URL and sends the video data in chunks). On every send from FFmpeg, I stream those bytes to all WebSockets connected to the server.
The code runs fine on Node, and in the 'data' events the chunks are received correctly.
In C#, I'm using this (I'll omit the WebSocket handling for now):
var routeBuilder = new RouteBuilder(app);
routeBuilder.MapRoute("{secret}/{width:int}/{height:int}", async context =>
{
    var secret = context.GetRouteValue("secret").ToString();
    if (configuracoesStream.Value.SenhaStream.Equals(secret))
    {
        context.Response.StatusCode = 200;
        var width = int.Parse(context.GetRouteValue("width").ToString());
        var height = int.Parse(context.GetRouteValue("height").ToString());
        var request = context.Request;
        try
        {
            context.Request.EnableRewind();
            using (var ms = new MemoryStream())
            {
                await context.Request.Body.CopyToAsync(ms);
                await streamHandler.WebSocket.SendAsync(new ArraySegment<byte>(ms.ToArray()), WebSocketMessageType.Binary, true, CancellationToken.None);
            }
        }
        catch (Exception ex)
        {
            var b = 1; // placeholder for error handling
        }
    }
});
app.UseRouter(routeBuilder.Build());
This doesn't work, because CopyToAsync stays blocked until I hit Ctrl+C on FFmpeg (stopping the video capture), so the stream never works: I'd only be able to send the video once it has completed.
However, if I use ReadAsync I have to provide an offset and a count, so I'd have to pick a fixed buffer length, and when I pass the data to https://github.com/tahaipek/Nodcam/blob/master/public/jsmpg.js it won't work because the data could be corrupted (one part of a frame could be in the first 1024 bytes read and the rest in the second).
On Node, these chunks are read correctly (e.g. 537 bytes, 134 bytes, etc.).
Yes, CopyToAsync is supposed to copy the entire stream.
Relying on the transport to pass your individual writes through intact is precarious at best. Any component between the client and the server could split or merge those data chunks and cause havoc with the assumptions made in your app. That this worked on Node seems more like luck than anything.
I don't see a simple description of the FFmpeg wire format, but the proper way to deal with streamed, formatted data like this is to only read it in known frame segment sizes. E.g. there is usually a fixed-size header segment that you read first. From that segment you determine how large the remainder of the frame is, and then read that.
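As a sketch of that header-then-payload pattern, assuming a hypothetical 4-byte big-endian length prefix per frame (FFmpeg's real wire format may differ, so the header parsing here is illustrative only):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

static class Framing
{
    // Read exactly `count` bytes, looping because a single ReadAsync
    // call may return fewer bytes than requested.
    public static async Task<byte[]> ReadExactAsync(Stream stream, int count)
    {
        var buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = await stream.ReadAsync(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }

    // Read one frame: a 4-byte big-endian length header (assumed format),
    // then the payload of exactly that many bytes.
    public static async Task<byte[]> ReadFrameAsync(Stream stream)
    {
        byte[] header = await ReadExactAsync(stream, 4);
        int length = (header[0] << 24) | (header[1] << 16) | (header[2] << 8) | header[3];
        return await ReadExactAsync(stream, length);
    }
}
```

Reading exact frame sizes this way makes the code immune to how the transport happens to split or merge the chunks.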
What I think is that the 'data' event on Node is raised on each chunk written, and in .NET this is not possible (or I don't know how).
No, you can't do that on ASP.NET servers. In practice it wouldn't be reliable anyway, because intermediaries can split or merge chunks.
What I think is that the 'data' event on Node is raised on each chunk written, and in .NET this is not possible (or I don't know how).
You can with CopyToAsync, but you can't do it to a MemoryStream, as that buffers the entire data set. You can implement your own Stream and handle WriteAsync; that's the moral equivalent of ondata. You still need to handle parsing the chunks yourself.
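A minimal sketch of that idea: a custom write-through Stream whose WriteAsync fires for every chunk CopyToAsync pushes into it. `ChunkObserverStream` and its callback are hypothetical names for illustration, not an existing API:

```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

// Write-only stream that forwards each written chunk to a callback
// instead of buffering the whole body the way a MemoryStream would.
public class ChunkObserverStream : Stream
{
    private readonly Func<ArraySegment<byte>, Task> _onData;

    public ChunkObserverStream(Func<ArraySegment<byte>, Task> onData)
    {
        _onData = onData;
    }

    // CopyToAsync calls this for each chunk as it is read from the source.
    public override Task WriteAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken)
    {
        return _onData(new ArraySegment<byte>(buffer, offset, count));
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        _onData(new ArraySegment<byte>(buffer, offset, count)).GetAwaiter().GetResult();
    }

    public override bool CanRead => false;
    public override bool CanSeek => false;
    public override bool CanWrite => true;
    public override long Length => throw new NotSupportedException();
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { }
    public override int Read(byte[] buffer, int offset, int count) => throw new NotSupportedException();
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
}
```

Usage sketch, wiring it to the request body so each chunk is forwarded as it arrives: `await context.Request.Body.CopyToAsync(new ChunkObserverStream(segment => streamHandler.WebSocket.SendAsync(segment, WebSocketMessageType.Binary, true, CancellationToken.None)));`. As noted above, the chunk boundaries are still transport-dependent, so any framing must be parsed by the callback.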
@davidfowl and @Tratcher thanks for your help. I'll dig some more based on your thoughts.
Closing this one.