flutter/flutter

Add `Picture.toImageSync` that returns `Image`


Currently, Flutter's Picture.toImage returns Future<Image> (API docs). The API docs say "Although the image is returned synchronously, the picture is actually rasterized the first time the image is drawn and then cached". This makes me think that wrapping the result in a Future is redundant. Can we have a Picture.toImageSync that returns the image truly synchronously?

The use case is procedurally generated atlases that can be rendered using Canvas.drawAtlas. For example, to render particle effects, such as those generated by https://felixblaschke.github.io/liquid-studio, it is more efficient to use an atlas rather than many drawCircle calls with a blur MaskFilter. However, to generate the atlas one needs to call Picture.toImage and miss at least one frame. Picture.toImageSync would allow us to generate an atlas synchronously (and cheaply, since the atlas contains just one particle) and in the same frame render thousands of particles using drawAtlas.
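As a rough illustration of the pattern (a sketch only: buildParticleAtlas and drawParticles are made-up names, and the blur sigma and sizes are arbitrary):

```dart
import 'dart:ui' as ui;

// Record a single blurred particle into a Picture, then rasterize it into
// an atlas Image. The toImage step is the asynchronous call this issue
// wants a synchronous counterpart for.
Future<ui.Image> buildParticleAtlas(double radius) {
  final recorder = ui.PictureRecorder();
  final canvas = ui.Canvas(recorder);
  final paint = ui.Paint()
    ..color = const ui.Color(0xFFFFFFFF)
    ..maskFilter = const ui.MaskFilter.blur(ui.BlurStyle.normal, 4.0);
  canvas.drawCircle(ui.Offset(radius, radius), radius, paint);
  final picture = recorder.endRecording();
  final size = (radius * 2).ceil();
  return picture.toImage(size, size);
}

// Stamp the one-particle atlas many times with a single drawAtlas call.
void drawParticles(
    ui.Canvas canvas, ui.Image atlas, List<ui.Offset> positions) {
  final sprite = ui.Rect.fromLTWH(
      0, 0, atlas.width.toDouble(), atlas.height.toDouble());
  canvas.drawAtlas(
    atlas,
    [for (final p in positions) ui.RSTransform(1, 0, p.dx, p.dy)],
    List.filled(positions.length, sprite),
    null, // no per-sprite colors
    null, // blend mode (only required when colors are given)
    null, // no cull rect
    ui.Paint(),
  );
}
```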

Although the image is returned synchronously, the picture is actually rasterized the first time the image is drawn and then cached.

That's not true. The image is sent for decompression immediately (and asynchronously) and won't be rasterized until it is included in a scene. There is no caching at the dart:ui layer, and the image will be collected when the Dart ui.Image reference is collected or explicitly disposed.

We should fix that documentation first and then decide how this affects the rest of the feature request.

We need to be careful about what we mean by "image". As far as the framework side is concerned, for all intents and purposes you have an image if you have a reference to the ui.Image object. What happens behind the scenes is an implementation detail. It could be an important detail (e.g. it visibly affects performance) and therefore is worth documenting accurately. If the framework needs access to the pixels themselves, there's a separate (also asynchronous) API for that: Image.toByteData.

This feature request is only about handing a reference to ui.Image to the framework. This request has no opinion about when it's actually rasterized, or how it's cached. The only requirement is that the reference can be synchronously used for drawing into a canvas. The rest can remain an implementation detail. This is feasible on the web. Can we do this on mobile/desktop?

ui.Image is and always has been a reference to a texture that is fully realized and resident in device memory. You may add any abstractions you want on top of this to make it fit a different mental model.

So, on the question of whether this can be done on mobile/desktop? Probably, but I don't see the point. ui.Image is the implementation detail. The framework can choose to add abstractions on top of it to fit the model you described just fine. There is no need to redefine what this super simple low-level utility means now. As you said, we need to be careful about what we mean when we say "image". I completely agree, but ui.Image means something specific.

It sounds like we won't be able to implement this.

@chinmaygarde

ui.Image is and always has been a reference to a texture that is fully realized and resident in device memory. You may add any abstractions you want on top of this to make it fit a different mental model.

So, on the question of whether this can be done on mobile/desktop? Probably, but I don't see the point. ui.Image is the implementation detail. The framework can choose to add abstractions on top of it to fit the model you described just fine. There is no need to redefine what this super simple low-level utility means now. As you said, we need to be careful about what we mean when we say "image". I completely agree, but ui.Image means something specific.

Can you please provide an example where one could use an abstraction to draw a picture and then use it as a source for an atlas without missing a frame?

@zanderso

It sounds like we won't be able to implement this.

I'm not yet sure that's the case. This is trivial to implement on the web.

Perhaps my request is more opinionated than it should be. The goal is to be able to render dynamically generated pictures as efficiently as drawAtlas (you can draw 100,000 sprites at 60 FPS; drawPicture can't do that). My original request makes an assumption that the right way to do this is to create a Picture, then turn it into an Image, then call drawAtlas passing it the image. This approach demonstrably works, except that it always misses the first frame due to Picture.toImage being async. But perhaps there's a way to achieve the same thing without having to provide a synchronous Picture.toImage.

Related: #37180.

I phrased that issue much more strictly at first (drawing raw pixel data synchronously), but I actually think that making Image decoding / Picture rasterization synchronous covers it much better (hence the rephrasing), since it allows drawing raw pixel data perfectly well and enables many other applications (like passing the result to drawAtlas).

Related: #40990 - some of these issues could be consolidated, but I agree there's a solid motivation to be able to use a Picture synchronously.

I don't really see how we'd ever get acceptable performance from a sync method here.

Today when Flutter rasterizes a layer tree, it's compositing all of it at once to show on screen. This doesn't require any readback.

An API like this would require reading back from a surface during the frame workload. That will be slow on mobile GPUs. Users will call this API multiple times per frame, or for images that are too big or complex to process within a frame.

I think we should not do this. This really is not equivalent to drawAtlas, because in the drawAtlas case you already have the pixels ready to go in a GPU-resident texture (usually). If we fix #13498 you'll be able to asynchronously get a GPU-resident rasterization of a Picture, and then use drawAtlas with that to your heart's content - and still show other UI while you're waiting for that rasterization to occur (instead of being blocked).

It may be worth noting that we have other customers requesting this feature (/cc @gw280 @zanderso who are familiar with this request). I've made similar comments on other documents about this request. IMHO this API would be a huge footgun.

I've been thinking about this some more and I think we could make this sync without blocking the UI thread.

I have a rough patch that seems to work. The main thing I'm trying to get my head around right now is what to do on a failure.

For example, if you try to call this API in the background on iOS, you won't get an image out of it. But if we make a sync API for this, we'll never have a good way to notify the UI thread about this failing on the raster thread.

There are some other failure modes that would have to be tackled. Here are the ones I can think of (a caller-side sketch follows the list):

  • There is no context available when you request the GPU image, so we can't make one. Do we just try again when one is available?
  • The context goes away after you make the GPU image, taking the texture with it. What do we do? Do a host transfer and re-upload for you?
  • The image is too big to fit into GPU memory. Do we try to tile it for you, or just fail and make you do it?
  • You have simply run out of memory.
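To make the consequence concrete for callers, here's a hedged sketch assuming the toImageSync shape discussed in this thread (tryRasterize is an illustrative name; the exact signature is not settled here):

```dart
import 'dart:ui' as ui;

ui.Image? tryRasterize(ui.Picture picture, int width, int height) {
  try {
    // Returns immediately. Any GPU-context loss, oversized-texture, or
    // out-of-memory failure happens later on the raster thread, where this
    // try/catch cannot observe it.
    return picture.toImageSync(width, height);
  } catch (e) {
    // Only synchronous failures (e.g. a disposed picture) surface here.
    print('toImageSync failed at the call site: $e');
    return null;
  }
}
```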

@dnfield The dart:ui APIs should aim to be primitives. So implementing automatic error recovery there doesn't seem like the right thing to do.

What happens when a platform view or other texture from the platform fails to render? Is this an analogous situation? Maybe just log it? Another option would be to put an asynchronous error on an out-of-band Stream or Future.

If we make this API completely synchronous, we can get into a bad situation where an error happens on the raster thread and we don't have a good way to notify code on the UI thread about it.

Right now we definitely have bugs on at least iOS where you can "successfully" create an image from a picture but it will never be renderable. We used to have a similar bug on Android that's been fixed. I'd like to avoid that with this API.

Having some kind of out-of-band stream or future might be interesting, but I'm not sure how much better that would be. Imagine, for example:

final image = picture.toImageSync(width, height, onError: (error) { /* what do? */ });
canvas.drawImage(image, Offset.zero, Paint());

That error callback will never fire before the call to canvas.drawImage, and the image you're trying to draw will never draw anything at all.

I suppose some higher-level package could correlate that error back to the now-poisoned canvas/picture and invalidate it...

And right now we don't really have good failure modes for PlatformViews failing to render, AFAIK - but it's a major pain point for customers and we should avoid adding more sharp edges like that.

It might help to take a step back and think about whether the errors that could be generated are actionable by the developer.

Right now we definitely have bugs on at least iOS where you can "successfully" create an image from a picture but it will never be renderable. We used to have a similar bug on Android that's been fixed. I'd like to avoid that with this API.

Errors like this are engine bugs, so they're unlikely to be actionable by the developer and don't need a Dart API to surface them. Spitting out error messages into the log would be sufficient. Another option would be to inject an asynchronous Dart exception into the root Zone.
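A minimal sketch of that last option, assuming a hypothetical engine-side hook (reportRasterFailure is not a real API):

```dart
import 'dart:async';

// Hypothetical hook the engine could invoke for non-actionable failures.
void reportRasterFailure(Object error, StackTrace stackTrace) {
  // Surfaces the failure as an uncaught asynchronous error in the root
  // zone, where it is reported like any other uncaught error.
  Zone.root.handleUncaughtError(error, stackTrace);
}

void main() {
  reportRasterFailure(
    StateError('picture could not be rasterized'),
    StackTrace.current,
  );
}
```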

If the error is actionable, then yeah, the API would be synchronous on the happy path, but would have to have some async component for handling the errors (and displaying something different on subsequent frames?) Maybe toImageSync could explicitly take some data to use if a fallback is needed? Not sure.
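One speculative shape for the sync-happy-path-plus-async-errors idea (nothing here is a settled API; names are illustrative):

```dart
import 'dart:ui' as ui;

// Speculative result type: the image is usable immediately for drawing,
// while the future completes with an error if rasterization later fails on
// the raster thread, at which point a caller could swap in fallback content
// on a subsequent frame.
class SyncImageResult {
  SyncImageResult(this.image, this.errors);
  final ui.Image image;
  final Future<void> errors;
}
```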

In any case, I feel like we're a bit off in the weeds here, and if this API can provide a speed boost, then we shouldn't let it get stuck on this.

It might help to talk this over more on a GVC (video call). My concern is that this API would have a lot of surprising rough edges for developers. For example, if they requested an image that's bigger than the GPU can handle (which may vary by GPU), or requested it when the GPU wasn't available, or tried to use it after the GPU context went away (and so it's no longer available).

In all of these cases, we couldn't throw a synchronous error. And if the engine has to just make it work, that's a lot of complexity.

Zach and I talked offline and agreed there should be some kind of benchmark/motivating case to determine how much performance we could get out of something like this. If we can show a really good case for it, then it would make sense to move forward with figuring out how to deal with the API. If not, then these API shortcomings don't matter.

Zach and I talked offline and agreed there should be some kind of benchmark/motivating case to determine how much performance we could get out of something like this. If we can show a really good case for it, then it would make sense to move forward with figuring out how to deal with the API. If not, then these API shortcomings don't matter.

Btw do you still need such a benchmark? My use case can be solved (as one of multiple approaches) via this method.

If you'd like to add one, that'd be great.

This thread has been automatically locked since there has not been any recent activity after it was closed. If you are still experiencing a similar issue, please open a new bug, including the output of flutter doctor -v and a minimal reproduction of the issue.