Async encode
Opened this issue · 2 comments
fengb commented
The initial encoder simply ran the top-level function as a generator. This is a problem for a number of reasons:
- Encodes into a separate buffer instead of being able to write directly into a stream.
- Assumes a specific output size and explodes if the guess is wrong.
- Does not handle recursion at all.
A better architecture needs the following:
- Inject the writable buffer on each iteration.
- Allow suspend/resume anywhere along the call stack. The ultimate test is trickling data into a 1-byte buffer.
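The shape of that architecture can be sketched with Python generators, where `send()` plays the role of resume-with-injected-buffer. This is only an analogue of the idea, not the project's API; `encode` and `drain` are made-up names:

```python
def encode(data: bytes):
    # Encoder coroutine: receives a fresh writable buffer from the
    # consumer at every suspension point and yields how many bytes
    # it wrote into it.
    buf = yield  # suspend until the consumer injects the first buffer
    pos = 0
    for byte in data:
        if pos == len(buf):
            buf = yield pos  # buffer full: report count, await a new buffer
            pos = 0
        buf[pos] = byte
        pos += 1
    yield pos  # final partial buffer


def drain(gen, chunk_size: int) -> bytes:
    # Consumer: drives the encoder with a single reused buffer.
    out = bytearray()
    buf = bytearray(chunk_size)
    next(gen)  # run the encoder to its first suspension point
    try:
        while True:
            n = gen.send(buf)  # inject the buffer, resume the encoder
            out += buf[:n]
    except StopIteration:
        pass
    return bytes(out)
```

The "ultimate test" above is then `drain(encode(data), 1)`: the whole payload trickles out through a 1-byte buffer.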
fengb commented
Braindump:
```zig
const EncodeContext = struct {
    const Self = @This();

    suspended: ?anyframe,
    buffer: []u8,
    out: ?[]u8,

    fn next(self: *Self, buffer: []u8) ?[]u8 {
        if (self.suspended) |frame| {
            self.buffer = buffer;
            resume frame;
            return self.out;
        }
        return null;
    }
};
```
```zig
fn encodeInto(self: Self, ctx: *EncodeContext) void {
    // Publish the filled slice through the context, then suspend so
    // the consumer can drain it and inject the next buffer.
    ctx.suspended = @frame();
    const len = self.write(ctx.buffer);
    ctx.out = ctx.buffer[0..len];
    suspend;
}
```
fengb commented
After refactoring a bit, I've found a few things:
- `async` really belongs in the buffer consumer. If `encodeInto` takes any writable (any struct with `write([]u8) !void`), the `write()` can be responsible for all the async shenanigans.
- Blocking `write([]u8)` is antithetical to multithreading, since each field must be able to write directly into an offset, e.g. `write(offset: usize, bytes: []u8)`