Not an issue, just wondering how to do something
So I have a compute shader-based renderer that uses OnRenderImage for rendering, and it uses Camera.scaledPixelWidth and Camera.scaledPixelHeight to get the screen width/height used to create its render textures.
What do I change the width and height variables to in order to make this work? I got it working by hardcoding the resolution, but things were funky (I could set it to any quality level and, as long as my renderer kept the same hardcoded resolution, it would still work??), but obviously I don't want to hardcode the resolution.
Also, when using OnRenderImage, what should the order of the scripts be in the inspector? Or should I use some other way to trigger rendering?
Thanks!!
Nvm on the weird stuff, found out what was causing that, but I still don't know what I should feed my own thing for the screen width/height.
Hi,
The fact that you're using Camera.scaledPixelWidth and Camera.scaledPixelHeight tells me that you're making use of Unity's dynamic resolution scaling feature. While that can work in conjunction with FSR2, FSR2 itself does not use Unity's dynamic resolution scaling to accomplish its upscaling: first, because Unity does not allow any control over the scaling process (which is where FSR2 would have to inject itself), and more importantly, because dynamic resolution scaling is not implemented for every graphics API (DirectX 11 being the biggest problem there).
The example BiRP and PPV2 integrations in the GitHub project show how to perform FSR2 upscaling together with dynamic resolution scaling. What's important to keep in mind is that there are three different resolution values to keep track of:
- Display size. This is the final output resolution, the one that FSR2 targets for its upscaling. This is typically your monitor's native resolution.
- Max render size. The highest possible internal rendering resolution, determined by the FSR2 quality mode and the display size. This is a fixed size set when creating the FSR2 context object.
- Scaled render size. This is the internal rendering resolution for the current frame, determined by multiplying the max render size by Unity's dynamic resolution scale factor. This is passed to FSR2 through the RenderSize parameter of the DispatchDescription object.
Again, you can check the reference integrations in the GitHub project to see how these different resolution values are handled. In particular, look at the GetScaledRenderSize method, which also checks whether dynamic resolution is supported and enabled in Unity.
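To make that relationship concrete, here's a rough sketch of how the three sizes could be derived. The quality mode ratios and the dynamic resolution check here are illustrative assumptions only; the authoritative logic is in the reference integrations' GetScaledRenderSize:

```csharp
using UnityEngine;

public static class Fsr2ResolutionExample
{
    // Illustrative FSR2 upscale ratios per quality mode; check the package's own
    // quality mode definitions for the authoritative values.
    private static float GetUpscaleRatio(int qualityMode)
    {
        switch (qualityMode)
        {
            case 0: return 1.5f;   // Quality
            case 1: return 1.7f;   // Balanced
            case 2: return 2.0f;   // Performance
            default: return 3.0f;  // Ultra Performance
        }
    }

    // displaySize is the final output resolution that FSR2 upscales to.
    public static void ComputeSizes(Camera camera, Vector2Int displaySize, int qualityMode,
        out Vector2Int maxRenderSize, out Vector2Int scaledRenderSize)
    {
        // Max render size: fixed when the FSR2 context is created, derived from the quality mode.
        float ratio = GetUpscaleRatio(qualityMode);
        maxRenderSize = new Vector2Int(
            Mathf.RoundToInt(displaySize.x / ratio),
            Mathf.RoundToInt(displaySize.y / ratio));

        // Scaled render size: the per-frame resolution, shrunk further by Unity's dynamic
        // resolution scale factor when that feature is enabled on the camera.
        scaledRenderSize = maxRenderSize;
        if (camera.allowDynamicResolution)
        {
            scaledRenderSize = new Vector2Int(
                Mathf.CeilToInt(maxRenderSize.x * ScalableBufferManager.widthScaleFactor),
                Mathf.CeilToInt(maxRenderSize.y * ScalableBufferManager.heightScaleFactor));
        }
    }
}
```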
As for the order of the scripts in the inspector, you'll want FSR2 to be the last OnRenderImage script on your camera. It has to execute last, otherwise any scripts after it will get a mismatched set of color and depth buffers and Unity starts complaining. However, the logic that sets the camera's pixel width and height to prepare for upscaling has to run before any OnRenderImage scripts that make use of the camera's size parameters. This is why the Fsr2ImageEffectHelper script exists: you can put it on the camera and move it all the way to the top of the scripts list.
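As for what to feed your own renderer: one illustrative option (not the package's own code) is to size your render textures from the buffers Unity passes into OnRenderImage, which already reflect the adjusted render resolution once the helper script has run first. The shader and kernel name below are hypothetical placeholders:

```csharp
using UnityEngine;

// Illustrative only: an OnRenderImage effect that sizes its output from the incoming
// source buffer instead of a hardcoded resolution.
[RequireComponent(typeof(Camera))]
public class ComputeShaderRendererExample : MonoBehaviour
{
    public ComputeShader shader;   // hypothetical compute shader
    private RenderTexture _output;

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Recreate the output texture whenever the render resolution changes.
        if (_output == null || _output.width != source.width || _output.height != source.height)
        {
            if (_output != null) _output.Release();
            _output = new RenderTexture(source.width, source.height, 0) { enableRandomWrite = true };
            _output.Create();
        }

        int kernel = shader.FindKernel("CSMain");   // hypothetical kernel name
        shader.SetTexture(kernel, "Result", _output);
        shader.Dispatch(kernel, Mathf.CeilToInt(source.width / 8f), Mathf.CeilToInt(source.height / 8f), 1);

        Graphics.Blit(_output, destination);
    }
}
```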
This whole setup is a bit convoluted, but it's a result of how OnRenderImage works in Unity and the restrictions around setting script execution order, not to mention that the implementation of upscaling in BiRP is a bit of a hack. OnRenderImage itself is also seen as a somewhat outdated, obsolete way of executing post-processing scripts; the preferred way these days is to use camera events and bind command buffers to them. That's how the PPV2 package works, for example.
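For reference, a minimal sketch of that command buffer approach; the camera event and the blit work below are placeholders rather than what the FSR2 integration actually records:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: build a CommandBuffer once and attach it to a camera event
// instead of relying on OnRenderImage.
[RequireComponent(typeof(Camera))]
public class CommandBufferEffectExample : MonoBehaviour
{
    private static readonly int TempRT = Shader.PropertyToID("_ExampleTemp");
    private CommandBuffer _cmd;

    private void OnEnable()
    {
        _cmd = new CommandBuffer { name = "Example post-process" };
        _cmd.GetTemporaryRT(TempRT, -1, -1);                          // camera-sized temp buffer
        _cmd.Blit(BuiltinRenderTextureType.CameraTarget, TempRT);     // copy out...
        _cmd.Blit(TempRT, BuiltinRenderTextureType.CameraTarget);     // ...and back (stand-in for real work)
        _cmd.ReleaseTemporaryRT(TempRT);
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterImageEffects, _cmd);
    }

    private void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterImageEffects, _cmd);
        _cmd.Release();
    }
}
```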
Thanks! I did get it working. Still thinking about whether or not to actually officially integrate it; probably will eventually, but we'll see.
Thank you!