Specify lifetime management
yutakahirano opened this issue · 33 comments
We need to specify lifetime management, like WebSockets does.
It is necessary because closing an active socket has a side effect observable by the remote peer, and that side effect is not governed by the GC, so we cannot rely on the GC alone. It's not a general ReadableStream concern, so we should specify it here.
My current idea:
When one of the following conditions is met, `body` MUST NOT be garbage collected:

- `body@[[closedPromise]]` is pending and has one or more handlers.
- `body@[[readyPromise]]` is pending and has one or more handlers.
This is annoying :P. (But necessary.)
Let me go a bit slower. I think you jumped to the end because you understand this stuff better. My perspective might not be that valuable because I don't understand the implementation complexities as well as you, so feel free to tell me I'm wrong :).
Here is my toy example that helps me understand the situation better:
```js
// UA code, written here in JS, but trying to mimic C++ semantics
var resolve;
var foo = {
  closedPromise: new Promise(r => resolve = r)
};
// Expose `foo` to author code, but not `resolve`:
giveAuthor(foo);

// author code
foo.closedPromise.then(() => console.log("closed successfully"));
```
The graph in this scenario is: `resolve -> foo.closedPromise`. So even if author code stops using `foo` and we GC `foo`, `foo.closedPromise` is not GCed until the UA code frees `resolve`.
Now, I guess the main issue for a stream is that we want something like WebSocket's:

> If a WebSocket object is garbage collected while its connection is still open, the user agent must start the WebSocket closing handshake, with no status code for the Close message. [WSP]

e.g., "if `body` is GCed while still open, cancel the stream."
Expanding the above toy example into an analogous one becomes:
```js
// UA code
var resolve;
var foo = {
  closedPromise: new Promise(r => resolve = r)
};
addWeakCallback(foo, function () {
  resolve();
  free(resolve); // we won't use it again.
});
giveAuthor(foo);
```
The stream version of this, I think, would be:
```js
class FetchBodySource {
  start(enqueue, close, error) {
    this@[[closeStream]] = close;
  }
  ...
}

var source = new FetchBodySource(...);
var body = new ReadableStream(source);
addWeakCallback(body, function () {
  source@[[closeStream]]();
  free(source);
});
exposeViaBindings(body);

// author code
body.closed.then(
  () => console.log("closed successfully"),
  e => console.error("errored", e)
);
```
Let's say that author code drops all references to `body`. It gets GCed! We never touch it again, from either UA code or author code.

OK, cool. We still have `source -> source@[[closeStream]] = body@[[close]] -> (resolve function for body@[[closedPromise]]) -> body@[[closedPromise]]`, so that's still retained, even if `body` is not.

Thus when UA code calls `source@[[closeStream]]()`, that resolves `body@[[closedPromise]]` (which is still retained even though `body` itself is not!). Thus `console.log("closed successfully")` will be triggered.
Does this make any sense? Or am I crazy and out of my depth and should leave this to the pros?
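As a side note, the `addWeakCallback` hook above is hypothetical, but modern JavaScript now exposes a similar (non-deterministically timed) hook as `FinalizationRegistry`. A rough sketch of the same idea, where `source` and its `closeStream()` method are stand-ins invented for this illustration:

```javascript
// Illustration only: FinalizationRegistry callbacks fire at the engine's
// discretion, so this cannot be a reliable cleanup mechanism on its own.
// `source` and closeStream() stand in for source@[[closeStream]].
const source = {
  closed: false,
  closeStream() { this.closed = true; }
};

const registry = new FinalizationRegistry((heldSource) => {
  // Runs some time after the registered target has been collected.
  heldSource.closeStream();
});

function makeBody() {
  const body = new ReadableStream({ start() {} });
  registry.register(body, source); // close the source when `body` is GC'd
  return body;
}

const body = makeBody();
// While `body` is strongly reachable, nothing has been closed yet.
```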
Thank you very much for your feedback!
You're right, this lifetime management is for the underlying source rather than for Readable[Byte]Stream itself. And I realized that such treatment is unnecessary if the author holds both the read end and the write end. Moreover, we need to think about the underlying sink as well.
Clearly I need to think more carefully.
Let me describe my motivation.
Imagine that the browser is implemented in JS and all objects are governed by a single GC. Even in such a browser, a connection to the outside of the browser, such as a socket, is a GC root.
That means that by default a socket will be kept alive while it is open, and the socket will stay open until closed by the author or the server.
```js
// (1)
fetch(req);
```
In the above code, the author has no means to get any information from the connection. In such a case, I'd like to allow the system to close the socket. Closing a socket has a side effect, but it is not observable to anyone in the browser, and the server doesn't care about it. So I'd like to allow that.
```js
// (2)
fetch(req).then((res) => res.body.closed).then(function() {
  console.log('closed');
}, function() {
  console.log('errored');
});
```
In the above case, the socket MUST NOT be closed automatically, because the author can detect the closed / errored timing. Note that the author has no means to pull data from the response, so the connection will stall if the backpressure mechanism takes effect.
```js
// (3)
fetch(req).then((res) => res.body.closed).then(function() {});
```
Theoretically this is equivalent to (1) but I'm afraid we cannot detect it.
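As an aside on the backpressure point in (2): the way an author relieves that stall is to actually pull from the body. A minimal drain loop, sketched against a locally built `ReadableStream` (`makeStream` is a stand-in for `res.body` from a real fetch) using the reader API discussed later in this thread:

```javascript
// Drain a stream to completion so backpressure never stalls the connection.
// `makeStream` stands in for `res.body`; it produces three chunks then closes.
function makeStream() {
  let i = 0;
  return new ReadableStream({
    pull(controller) {
      if (i < 3) controller.enqueue(i++);
      else controller.close();
    }
  });
}

async function drain(stream) {
  const reader = stream.getReader();
  let chunks = 0;
  for (;;) {
    const { done } = await reader.read();
    if (done) return chunks;
    chunks += 1;
  }
}

const drained = drain(makeStream()); // resolves to 3
```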
Oh, I understand now. This is another "we don't want to expose GC behavior" thing.
Because if we did allow that, then we could just use the scheme from my above comment. The browser would automatically close the stream (which is OK because all references to `res.body` are gone, so the author will never be able to read from it). And the author would get notified that it is closed via the handlers they have registered. But that is bad.
OK. Now that I understand the problem I think your proposed solution is good. The only other alternative I can think of would be not adding the close-stream-on-GC behavior, and forcing users to manually manage their resources. Maybe that is OK?
In ES spec speak you can say "promise has any entries in the List that is the value of its `[[PromiseFulfillReactions]]` internal slot" (similarly for `[[PromiseRejectReactions]]`). You can probably simplify that to just "`promise@[[PromiseFulfillReactions]]` is not empty." Note that once the promise becomes fulfilled or rejected those Lists get emptied, so there's no need for a state check to ensure it's pending.
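`[[PromiseFulfillReactions]]` is not readable from JS, but the condition can be illustrated with a Promise subclass that counts attached reactions (purely an illustration invented here; an engine would inspect the internal slot directly):

```javascript
// Illustration: approximate "promise@[[PromiseFulfillReactions]] is not empty"
// by counting how many reactions get attached via then().
class CountingPromise extends Promise {
  constructor(executor) {
    super(executor);
    this.reactionCount = 0;
  }
  then(onFulfilled, onRejected) {
    this.reactionCount += 1;
    return super.then(onFulfilled, onRejected);
  }
}

const p = new CountingPromise(() => {}); // stays pending forever
// reactionCount === 0 here: under the rule, `p` could be collected.
p.then(() => {});
// reactionCount === 1 now: `p` must be kept alive.
```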
I found #15 (comment) doesn't work, because `@readyPromise` has handlers when `.ready` is accessed and the state is "waiting". Note that the `.ready` getter is defined as follows (in pseudo code):

```
this.ready =
  if locked then this@[[reader]]@[[released]].then(() => this.ready)
  elif this.state === 'waiting' then this@[[readyPromise]].then(() => this.ready)
  else this@[[readyPromise]]
```
How feasible do you think it is to force authors to manually manage their open connections and such? Very bad? I am pretty sure in Node.js you have to do so (and in most server-side runtimes), but I appreciate that the browser is different.
> because @readyPromise has handlers when .ready is accessed and the state is "waiting"
I'd like to help figure out the fix but I don't quite understand the problem :-/. Why does having handlers when .ready is accessed and the state is "waiting" mean your original plan does not work?
> How feasible do you think it is to force authors to manually manage their open connections and such? Very bad? I am pretty sure in Node.js you have to do so (and in most server-side runtimes), but I appreciate that the browser is different.
That's possible, and in fact, it is comfortable for me as an implementer, if authors accept it.
I can think of a few reasons why we need this feature.

- Many other interfaces have this feature. In particular, XHR has it, so people might expect a similar feature in the fetch API.
- We can save careless authors.
- Developers not interested in the response body may want this feature. For example:

  ```js
  fetch(url).then(res => res.headers)
  ```

  and

  ```js
  fetch(new Request(url, {method: 'POST', body: 'hello'})).then(() => log('done!'))
  ```

I don't know how strongly developers want this feature, though.
> because @readyPromise has handlers when .ready is accessed and the state is "waiting"
>
> I'd like to help figure out the fix but I don't quite understand the problem :-/. Why does having handlers when .ready is accessed and the state is "waiting" mean your original plan does not work?
I uploaded a WIP commit: https://github.com/yutakahirano/fetch-with-streams/tree/auto-close
It states that a stream is observable if `s@[[readyPromise]]` has handlers. That said, if you have ever accessed `s.ready`, `s` is observable while `s` is waiting even if you have never attached a handler to the returned promise, because the `s.ready` getter implementation itself attaches a handler `() => this.ready` to `s@[[readyPromise]]`. That is not what we expect.
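The failure can be reproduced in isolation: a getter defined in terms of `.then()` attaches a reaction even when the caller discards the result. A minimal sketch (the `reactions` counter is bookkeeping invented for this demonstration, not part of any spec):

```javascript
// Sketch: why merely evaluating `s.ready` gives s@[[readyPromise]] a handler.
const readyPromise = new Promise(() => {}); // never settles
let reactions = 0;
const rawThen = readyPromise.then.bind(readyPromise);
readyPromise.then = (...args) => { reactions += 1; return rawThen(...args); };

const s = {
  state: 'waiting',
  get ready() {
    // Mirrors the pseudo code: while waiting, chain off [[readyPromise]].
    if (this.state === 'waiting') return readyPromise.then(() => this.ready);
    return readyPromise;
  }
};

s.ready; // the caller discards the result...
// ...but reactions === 1: a handler was attached anyway.
```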
So the possible options I can think of are:

1. Do not abort the connection automatically.
   - It is the simplest way.
   - Some developers might suffer.
   - As many other interfaces have a "Garbage Collection" section, we should at least add some explanation.
2. Abort the connection when the write end of the request and the read end of the response are not reachable from the GC roots.
   - Simpler than the next one.
   - This may expose the GC timing through `.ready` and `.closed`.
3. Abort the connection when the request and the response become unobservable.
   - Currently I am failing to specify the rule.
   - It is complex and may confuse developers.
After discussing with @tyoshino, I'm now inclined to accept Domenic's proposal (manual connection management). Here are the reasons:
- It's simple and easy to understand.
- I am still interested in idea (3) at #15 (comment), but I can't fix the problem.
- We don't have to save careless developers (in other words, careless developers don't care about leaks).
- If many developers want an auto-close mechanism, we can revisit it then. There will be no compatibility issue.
> If many developers want an auto-close mechanism, we can revisit it then. There will be no compatibility issue.
If you are convinced of this then that is very good. I wasn't 100% sure.
I think it might be hard to say that fetch() is nicer than XHR when it can cause connections to stay open. But maybe it isn't a big deal, because in most cases a developer will read the body to the end. The POST example you gave might be an exception. But then again, most POST bodies are probably small enough that the browser can just read them into memory and close the connection anyway (without closing the stream itself).
I anticipate pushback from other parts of the implementer and standards community on this so we might have to figure out an auto-close story anyway, or at least assemble a good explanation of why we think it's OK.
I also think it might be OK to expose GC timing (your 2), because eventually there is a (somewhat controversial) hope to add weak references to JS, as long as the reference only changes between turns of the event loop. That would make the behavior no different from what any JS program could achieve. But again this is controversial. It might be easier to add later when/if weak references make it into JS, and rely on manual collection in the meantime.
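(That hope later materialized: `WeakRef` is now in the language, with exactly that restriction.) A sketch of the invariant, which is deterministic as long as the target is strongly held:

```javascript
// WeakRef#deref can only start returning undefined between turns of the
// event loop; within one turn, while the target is strongly reachable,
// it is guaranteed to return the same object.
const target = { connectionOpen: true };
const ref = new WeakRef(target);

const same = ref.deref() === target; // true: target is still strongly held
```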
Status update: the Streams API has changed. Now:

- ReadableStream doesn't have `.ready`.
- ReadableStream doesn't have `.closed`.

See whatwg/streams#297 for details.
As `.ready` and `.closed` are removed, ReadableStream no longer has the problem. But `.closed` may be added again in the future.
I now think that manual management is feasible for readers. It means that a reader (obtained from Request.body or Response.body) will not be collected while it is active. I'm not certain about streams.
A: Manual management

Request.body and Response.body will not be collected while readable. When we are sure that `.closed` will not come back again, we can remove the statement (i.e. they can be collected if not reachable from the root set).

B: No specification

Request.body and Response.body can be collected if not reachable from the root set. If `.closed` comes back, the garbage collection timing will be revealed through handlers attached to it, but we don't care.

I would like to choose B because it is feasible for many users if `.closed` won't come back. If `.closed` does come back, we can state that we terminate the fetch operation but don't call `close()` or `error()`, so that the handlers are not called.
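The trick can be sketched directly: if the underlying source never calls `close()` or `error()`, the reader's `closed` promise simply never settles, so no handler can observe the termination. (The locally built stream below stands in for a response body whose fetch the UA has silently torn down.)

```javascript
// Terminating without close()/error(): reader.closed stays pending forever.
const stream = new ReadableStream({
  start(controller) {
    // Imagine the UA tears down the network connection here, deliberately
    // calling neither controller.close() nor controller.error().
  }
});

const reader = stream.getReader();
const outcome = Promise.race([
  reader.closed.then(() => 'settled', () => 'settled'),
  new Promise(resolve => setTimeout(() => resolve('still pending'), 50))
]); // resolves to 'still pending'
```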
Agreed. The trick of not calling close() or error() is a nice one! But I think `.closed` will probably not come back.
This should be very straightforward. Everything that would be able to track GC of the object from JavaScript must keep the object alive. And then you have a separate hook for "do this when GC'd" which Fetch will use to terminate.
@annevk I think it's debatable whether the better model is "if it's at all possible that the termination is unobservable to JS, then terminate" vs. "if the user has crossed a clear boundary line, then they no longer get auto-terminate-on-GC."
The former means e.g.

```js
const reader = res.body.getReader();
reader.closed;
// can terminate
```

```js
const reader = res.body.getReader();
reader.closed.then(onClosed);
// can terminate
```

```js
const reader = res.body.getReader();
reader.closed.catch(onAbnormalTermination);
// cannot terminate
```

The distinction between these three cases is subtle, and it seems unfortunate to have the number of open connections to the server end up depending on the differences between them.

Whereas the latter approach says: by calling `res.body.getReader()`, you have explicitly opted in to manual resource management. (So all three of the above cases would not terminate.)
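Under that opt-in model, the author's obligation looks something like this (sketch; a locally constructed stream stands in for `res.body`):

```javascript
// Once getReader() is called, releasing the resource is the author's job.
const body = new ReadableStream({
  start(controller) {
    controller.enqueue('chunk');
    controller.close();
  }
});

const reader = body.getReader();
// ... read as needed ...
reader.releaseLock(); // give the lock back when done with the reader
body.cancel();        // then explicitly discard the underlying resource
```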
The intent of the current spec is the latter. I don't have a strong preference for it, but it makes sense. We noted that in non-web platforms, such as Node, these types of GC hooks do not exist, and resource management of this sort is entirely manual.
I don't see why we'd disallow optimizations that cannot be observed and are consistent with other platform objects.
I think it's debatable both whether termination is an optimization, and whether it cannot be observed. (Since the number of connections is indirectly observable on the client, or directly observable on the server.)
Concretely, what I'm worried about here is someone coding up a site which works well with 10 open connections, based on code which forgets to call `reader.releaseLock()` but does e.g. `reader.closed.then(onClosed)`. Later they add some new feature to the site that requires calling `reader.closed.catch(onAbnormalTermination)`. Now their site has thousands of open connections, their server starts crashing, everything goes wrong, and it's not at all obvious why. If the rule were simply "the connection stays open until you release the reader," they'd find out early that they were keeping the connections open.
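A defensive coding pattern that makes that failure mode hard to hit is to pair every `getReader()` with a `releaseLock()` in a `finally` block (sketch against a locally built stream rather than a real response):

```javascript
// Always release the reader, even on early return or exception, so the
// connection's lifetime is explicit rather than handler-dependent.
async function readAll(stream) {
  const reader = stream.getReader();
  try {
    const chunks = [];
    for (;;) {
      const { value, done } = await reader.read();
      if (done) return chunks;
      chunks.push(value);
    }
  } finally {
    reader.releaseLock();
  }
}

const stream = new ReadableStream({
  start(c) { c.enqueue(1); c.enqueue(2); c.close(); }
});
const result = readAll(stream); // resolves to [1, 2]
```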
@domenic, I'm all for "fast fail" to avoid subtle errors, but memory constraints are already shifting and hard to pin down. Even if we always leaked here it might work fine and then suddenly start getting OOM'd when the device is used in a slightly different way.
Also, encouraging leaks from one site negatively impacts other tabs running different origins. I think this is highly undesirable.
In general I think the UA should strive to be as memory efficient as it can given the constraints from the script.
But the great thing about different browsers is we can optimize in different ways. I just don't think the spec should re-define observability to rule out optimizations chrome doesn't want to implement.
I don't think saying "you need to release your resources; we won't do that automatically for you" is "encouraging leaks"...
But I see your point about letting UAs do different things. As long as we accept that loses some interop, it seems fine.
I think I've said my piece here and will leave it to @yutakahirano to chime in with his thoughts :)
I was also going to mention leaking these resources is extra bad in that it also leaks the file descriptor. I don't know about chrome, but I think this could potentially crash the gecko parent process if all fds are exhausted. That's extra double bad.
#25 (comment)
Actually, should we allow GC termination of a fetch in the case where the reader is not locked, but the data is possibly being written to the http cache? It seems the current proposal would terminate in this case and truncate the cache data.
I think so, and the cache data will not be truncated. A fetch operation can be terminated for various reasons, including network errors and user abort, and the cache should work well with those. GC termination should be treated in the same manner.
> I think so, and the cache data will not be truncated. A fetch operation can be terminated for various reasons, including network errors and user abort, and the cache should work well with those. GC termination should be treated in the same manner.
Just to clarify, are you saying that if a fetch is terminated it should still be written to the http cache in full? Or are you saying it should be removed from the http cache? Thanks!
I'm not sure whether it is specified. That can happen when the body is extremely large, even without streaming.
In the spec, there is a statement:

> If response is not a network error and request's cache mode is not "no-store", update response in the HTTP cache for request.

but it says nothing about the case where the update fails.
We should probably fix that. I wish there were a more object-based specification of the HTTP cache, with algorithms that return error or success and some advice on what to do under various conditions...
@annevk, existing specs like WebSockets or XHR have a Garbage Collection section, but they are written in an explicit fashion, like:

> An XMLHttpRequest object must not be garbage collected if its state is either opened with the send() flag set, headers received, or loading, and it has one or more event listeners registered whose type is one of readystatechange, progress, abort, error, load, timeout, and loadend.

Is it possible (or desirable) to specify it in an implicit fashion, as @wanderview said?
> The UA may terminate a fetch that has a locked reader if and only if the following two conditions are true:
>
> - The fetch has been suspended.
> - The UA can guarantee the fetch termination cannot be observed through the stream object or one of its promises.
Is "suspended" defined? That language still doesn't tell me much about how GC depends on the network, but maybe that's okay? Not sure I'm the best reviewer for this.
I did not intend to accept the literal expression, but wanted to ask whether it's OK to specify it in a more implicit way than XHR's.
The most implicit version doesn't even have to mention the Response object or its body:

> The user agent may terminate a fetch with reason garbage collection if that termination cannot be observed through the promise returned by the fetch function.
Do you think the behavior is well-defined? If so, I'm happy to write it that way (many examples and notes will be needed, though). @domenic, is "can be observed" well-defined in the JS world?
Do you have any ideas? To be clear, I'm happy with the current description - it's easy to understand and easy to implement. But I'm not opposed to loosening the rule so that an implementer can implement a more sophisticated mechanism.
Note that terminating a fetch is not mandatory, i.e. a system which doesn't terminate a fetch operation with reason garbage collection is conformant. Similarly, a system compliant with the current description will automatically be compliant with the loosened rule.
Seems no one is interested in this topic...
Well, I think Mozilla is interested in allowing implementations to close whenever it cannot be observed.
Hm, I will create a PR containing something like #15 (comment).
Merged the PR.