`async` support
Objective-C uses "completion handlers" for concurrency. Swift automatically translates these to `async` functions; we could consider doing something similar after #264. It would also be a good step forwards for `async` in winit.
See also blockr's `async` support, and cidre's `async` support.
Would probably also be good to look at what is actually required on the runtime-side of this? Do we need to do stuff with `NSRunLoop`?
Instead of `async fn method()`, we could use `fn method() -> CompletionHandler<()>`, and then `impl IntoFuture for CompletionHandler`.
> Would probably also be good to look at what is actually required on the runtime-side of this? Do we need to do stuff with `NSRunLoop`?
There is actually a way to make `NSRunLoop` fully async, since:

1. `NSRunLoop` is a wrapper around `CFRunLoop`.
2. `CFRunLoop` is a wrapper around Grand Central Dispatch.
3. Grand Central Dispatch uses a Mach port to coordinate itself.
4. Mach ports can be registered in `kqueue`.
So if we can register the global GCD Mach port into a `kqueue`, we can then put that into, say, an `async-io::Async` and then use the `readable()` function to tell when it's available. The main downside of this approach is steps 2 and 3: gaining access to the GCD Mach port requires unstable OS APIs (namely, `_dispatch_get_main_queue_port_4CF()`), but it's not like the Rust async stack isn't already built on top of unstable OS APIs anyways.
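To make that a bit more concrete, a minimal sketch of the registration step could look like the following, assuming the private `_dispatch_get_main_queue_port_4CF` API and the `libc` crate; this is only an illustration of the idea, not something objc2 provides:

```rust
// Sketch only: register the GCD main-queue Mach port in a kqueue, so that
// the kqueue fd becomes readable whenever work is queued on the main queue.
// The returned fd could then e.g. be wrapped in `async_io::Async` and
// polled with `readable()`, as described above.
use std::io;
use std::ptr;

extern "C" {
    // Private libdispatch API (unstable, as discussed above).
    fn _dispatch_get_main_queue_port_4CF() -> libc::mach_port_t;
}

fn register_main_queue_port() -> io::Result<libc::c_int> {
    unsafe {
        let kq = libc::kqueue();
        if kq < 0 {
            return Err(io::Error::last_os_error());
        }

        let port = _dispatch_get_main_queue_port_4CF();
        let change = libc::kevent {
            ident: port as libc::uintptr_t,
            filter: libc::EVFILT_MACHPORT,
            flags: libc::EV_ADD | libc::EV_ENABLE,
            fflags: 0,
            data: 0,
            udata: ptr::null_mut(),
        };
        if libc::kevent(kq, &change, 1, ptr::null_mut(), 0, ptr::null()) < 0 {
            return Err(io::Error::last_os_error());
        }

        Ok(kq)
    }
}
```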
Interesting to know, thanks!
I don't think I would have that much against using unstable APIs like `_dispatch_get_main_queue_port_4CF`, but would ideally really like to avoid it (since it can have consequences for submitting apps to the App Store, as it is not public).
Related Kotlin issue about their unfinished async support: https://youtrack.jetbrains.com/issue/KT-47610
A necessary prerequisite is #168, since we need some way to tell that the completion handlers will only be run once.
To take the example from the linked Swift proposal:
```objc
- (void)fetchShareParticipantWithUserRecordID:(CKRecordID *)userRecordID
                            completionHandler:(void (^)(CKShareParticipant * _Nullable, NSError * _Nullable))completionHandler;
```
And what that would (ideally) be in Rust, pre-async translation:
```rust
fn method_name_bikeshed(
    &self,
    userRecordID: &CKRecordID,
    completionHandler: BlockOnceSendSync<(Option<&CKShareParticipant>, Option<&NSError>), ()>,
);
```
It would probably be prudent to first convert the Objective-C style error into the usual Rust error:
```rust
fn method_name_bikeshed(
    &self,
    userRecordID: &CKRecordID,
    completionHandler: BlockOnceSendSync<(Result<&CKShareParticipant, &NSError>,), ()>,
);
```
Note that this has a different ABI, but I might be able to handle such things directly in `block2`.
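For illustration, the conversion such an adapter would have to perform on the block arguments could look roughly like this (hypothetical free function, not part of `block2`):

```rust
/// Convert the Objective-C convention of passing `(result, error)`, where
/// exactly one of the two is expected to be non-nil, into a `Result`.
fn convert_completion_args<'a, T, E>(
    value: Option<&'a T>,
    error: Option<&'a E>,
) -> Result<&'a T, &'a E> {
    match (value, error) {
        (Some(value), _) => Ok(value),
        (None, Some(error)) => Err(error),
        // A well-behaved API should never do this, but it is possible in
        // principle, so a real adapter has to decide how to handle it.
        (None, None) => panic!("completion handler passed neither a value nor an error"),
    }
}
```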
Now, we could subsequently convert it to:
```rust
fn method_name_bikeshed(
    &self,
    userRecordID: &CKRecordID,
) -> BikeshedHelper<Result<&CKShareParticipant, &NSError>, impl FnOnce(...)>;

struct BikeshedHelper<T, F: FnOnce(BlockOnceSendSync<(T,), ()>)> { ... }

impl<T, F: FnOnce(BlockOnceSendSync<(T,), ()>)> BikeshedHelper<T, F> {
    pub fn run_with_block(self, block: BlockOnceSendSync<(T,), ()>) { ... }
}

impl<T, F: FnOnce(BlockOnceSendSync<(T,), ()>)> IntoFuture for BikeshedHelper<T, F> {
    type Output = T; // TODO: Convert &MyClass to Id<MyClass> here
    type IntoFuture = BlockFuture<T>;
}

struct BlockFuture<T> { ... }
```
Immediately we see a few issues:

- Ideally we'd be able to choose to do the call manually using blocks, instead of the `async` functionality. The `run_with_block` method attempts to make this possible, but it complicates the signature in a way I'm not sure will be possible to support easily. Alternatively we would have to emit both the async and the non-async version, which also sounds bad.
- We need some way to generically convert reference arguments given in the block to retained arguments that are safe to return to the surrounding context.
- There is a performance cost to this, since we have to do said retain.
Note: I think it makes sense to make separate adapters depending on whether the block can receive an error or not (i.e. one parameter is `Option<&NSError>`, though not necessarily the last?), since the Swift design document seems to treat them differently.
And we also need to distinguish between blocks with one parameter vs. multiple parameters (the former should be a simple output, while the latter should return a tuple).
Linking clang's `_Nullable_result` attribute, which is useful for determining whether the async function should return an `Option` or not.
An alternative solution for integrating with the rest of the `async` ecosystem might be to implement an alternative `mio::Source` (related: tokio-rs/mio#1500)?
Though it doesn't seem like `async-io` provides a similar extension mechanism, likely for performance reasons?
(I can tell I'm way too inexperienced with the `async` ecosystem to even begin knowing what's up and what's down.)
`async-io` exposes mechanisms for accessing other `kqueue` filters, like processes exiting and signals firing. Granted, so far what we expose is a pretty narrow subset of what's possible with `kqueue`.
However it wouldn't be too difficult to expose, say, `EVFILT_MACHPORT` in `async-io`. It's just a few more types and another item in an `enum`.
The issue is that we hit the limits of what can be registered into `kqueue` pretty quickly. As noted above, some mechanisms can be translated into Mach ports, but not all, and not in a stable way.
The currently available `async` runtimes are built around networking, which means `kqueue` for macOS. An interesting idea I haven't had the time to implement yet is an `async` runtime that uses the GUI primitives as a base. So it would use `NSRunLoop` on macOS, `MsgWaitForMultipleObjects` on Windows, `ALooper` on Android, et cetera. But as the current GUI systems seem resistant to the idea of adopting `async`, I haven't pursued it yet.
Spent some time on this today; from my understanding there are effectively two things that need to be done:

- Allow converting completion handlers and delegate callbacks to futures.
- Integrate the `async` ecosystem with run loops, as talked about above.
I still only have a vague notion on how to progress on the second item, but I think the first one is tractable.
Effectively, we need a wrapper type that implements `Future` and can store the `Waker` that executors pass, and which will, when the callback is invoked, store (and `retain`) the callback parameters and `.wake()` the waker so that the future can take them out.
Basic usage of such a wrapper, to illustrate the idea:
```rust
let continuation = Continuation::new();

let completion_handler = block2::RcBlock::new(|x, y, z| {
    continuation.resume((x, y, z));
});

my_obj.my_method(my_arg, &completion_handler);

continuation.await
```
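For reference, a minimal thread-safe sketch of such a `Continuation` type could look like this (placeholder implementation, not an actual `block2`/`objc2` API; the `retain` and `NSError` handling discussed below is omitted):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

struct State<T> {
    value: Option<T>,
    waker: Option<Waker>,
}

#[derive(Clone)]
pub struct Continuation<T>(Arc<Mutex<State<T>>>);

impl<T> Continuation<T> {
    pub fn new() -> Self {
        Continuation(Arc::new(Mutex::new(State { value: None, waker: None })))
    }

    /// Called from the completion handler; stores the value and wakes the future.
    pub fn resume(&self, value: T) {
        let mut state = self.0.lock().unwrap();
        state.value = Some(value);
        if let Some(waker) = state.waker.take() {
            waker.wake();
        }
    }
}

impl<T> Future for Continuation<T> {
    type Output = T;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
        let mut state = self.0.lock().unwrap();
        match state.value.take() {
            Some(value) => Poll::Ready(value),
            None => {
                // Remember the waker so that `resume` can wake us later.
                state.waker = Some(cx.waker().clone());
                Poll::Pending
            }
        }
    }
}
```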
Swift calls this a "continuation", and @drewcrawford has recently extracted the implementation from blockr into the crate `continue` to make it possible in Rust. This was also discussed on the Smol Matrix; they recommend the `oneshot` crate.
I suspect I'll still want a helper wrapper in `block2` though:

- For convenience.
- For easier `retain` and `NSError` handling.
- To optimize the `Arc` into the block itself.
- To have a non-thread-safe version (since main-thread-only code is common in Apple GUI code).
But for the delegate callback case I'll probably document `continue` as the recommended approach.