immersive-web/layers

Support mipmapping in composition layers

cabanier opened this issue · 10 comments

We currently don't support mipmapping.
It seems that it would be useful for experiences that want to display high quality content.

I don't think it's useful for projection layers because they are always shown at full pixel resolution.
Maybe it also doesn't make sense for cube and equirect layers.

@toji What do you think?

/agenda Should we have mipmapping in WebXR Layers?

toji commented

OpenXR swap chains have a mipCount, and it seems like a good idea for WebXR too. I agree that it doesn't make sense for projection layers.

I'm not sure about cube/equirect. It seems like most of the time those types of layers will be shown with magnification filtering, but in a case where the resolution of the layer does exceed the resolution of the screen mipmapping could still help. This may be especially true of foveated displays where the periphery is lower resolution. As such, I think we should allow it. That also means we can put any mipmapping flags on the XRLayerInit for simplicity.

Digging in a little deeper: I think we should probably support it by allowing a mipCount to be specified just like OpenXR, rather than trying to be clever about how we allocate it. Likely default to 1. We should also probably specify that developers are responsible for populating the mip chain if they ask for one to be allocated, rather than trying to do it automatically. (That's the only way it could work for compressed textures.)
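A sketch of what that could look like from script. The `mipLevels` member on the layer init dictionary is an assumption here, not settled API; the mip-chain length math is the standard GL/OpenXR rule.

```javascript
// Length of a complete mip chain: floor(log2(max(w, h))) + 1,
// the same rule WebGL and OpenXR use for full chains.
function fullMipCount(width, height) {
  return Math.floor(Math.log2(Math.max(width, height))) + 1;
}

// Hypothetical layer creation -- `mipLevels` in the init dictionary is an
// assumption. Per the proposal, the developer is then responsible for
// rendering (or uploading compressed data) into every level themselves,
// e.g. by attaching the layer's color texture at each `level` with
// gl.framebufferTexture2D().
// const quadLayer = xrGLBinding.createQuadLayer({
//   space: refSpace,
//   viewPixelWidth: 1024,
//   viewPixelHeight: 1024,
//   mipLevels: fullMipCount(1024, 1024), // 11 levels for 1024x1024
// });
```

Defaulting `mipLevels` to 1 keeps the current single-level behavior for existing content.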

I agree.
I will make a PR and we can discuss it next week. If everyone agrees, I will merge it.

I've asked internally and was told that mipmapping also makes sense for projection layers. I will do some more research.

toji commented

Oh, interesting! I'd be curious to learn more about why.

> Oh, interesting! I'd be curious to learn more about why.

This is the information I got:

On PC, projection layers are rendered pretty significantly above the screen resolution so that the middle of the field of view is at peak pixel resolution and the compositor sampling will over-sample the edges of the field of view (which due to rectilinear projection are over peak pixel resolution). To mitigate this, we generate mipmaps on the projection layer.
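The edge oversampling described above can be quantified for a symmetric rectilinear projection: screen position goes as tan(θ), so a uniform pixel grid samples the view at sec²(θ) times the center's angular density at angle θ off axis. A small sketch of that relationship (illustrative math only, not taken from any particular runtime):

```javascript
// For a rectilinear (pinhole) projection, screen x = tan(theta), so
// dx/dtheta = sec^2(theta): a uniform pixel grid lands sec^2(theta) times
// the center's pixels-per-degree at angle theta off axis. That surplus at
// the field-of-view edges is what mip sampling in the compositor tames.
function edgeOversample(halfFovDegrees) {
  const theta = (halfFovDegrees * Math.PI) / 180;
  return 1 / Math.cos(theta) ** 2; // sec^2(theta)
}
```

At a 45° half-FOV the edge already receives 2x the center's angular pixel density, so rendering above screen resolution to make the center crisp pushes the edges even further past peak resolution.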

On mobile, this isn't presently useful because display resolutions are very high and projection layer resolutions are low (limited by GPU compute capabilities), but it would become useful with higher-performing GPUs.

So it seems we should provide an option for authors to request mipmapping, which the runtime may honor.
Maybe we can add an option to the init structure to create a projection layer with a requested mip count, and then expose an attribute on the projection layer that reports whether it actually has mips.
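That request-then-readback pattern might look like the sketch below. Both the `mipLevels` init option and the read-only `mipLevels` attribute on the layer are hypothetical here, mirroring the proposal rather than any shipped API:

```javascript
// Feature test written against a plain object standing in for the layer,
// since the hypothetical attribute would be an ordinary readback.
function layerHasMips(layer) {
  // A runtime that declined the request would report a single level.
  return (layer.mipLevels ?? 1) > 1;
}

// const projLayer = xrGLBinding.createProjectionLayer({
//   textureType: "texture",
//   mipLevels: 3,               // assumed request in the init structure
// });
// if (!layerHasMips(projLayer)) {
//   // Runtime declined or clamped the request; skip per-mip rendering work.
// }
```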

toji commented

> To mitigate this, we generate mipmaps on the projection layer.

That sounds like something that is happening internally and automatically, which might be the right thing to do in that case? And projection layers will generally be different in that you'll probably never have a compressed projection layer and you're expected to update them ~per frame. Given that different runtimes will have different needs here I feel like it's something that we should just leave to the implementation if needed. (I don't want people on a Quest wasting cycles manually generating a full mip chain for their projection layers because they saw that it helped with their desktop headset.)

True. To make sure we're in agreement, what you're saying is:

UAs are allowed to create projection layers that are larger than the eye buffers. They are then also allowed to create a mip chain under the hood. This means that we don't have to expose the mip level on projection layers.

I checked internally and mips are always autogenerated.

toji commented

Yes, that sounds good to me!