
DOOM Eternal - Graphics Study

utterances-bot opened this issue · 11 comments

A graphics study of Doom Eternal

https://www.simoncoenen.com/blog/programming/graphics/DoomEternalStudy.html

Great read mate! TY!

As always, just wonderful insights! Thank you very much for the study.

I wonder, though: how can screen-space reflections be done during fragment shading of the meshes if the color buffer is not ready yet? The depth buffer is ready, so they can trace rays, but how do they resolve the traced color?

You're right, at that time the color buffer is not available.
That's why the color buffer from the previous frame is reprojected and sampled.
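
Roughly, that reprojection looks like this (a minimal sketch in C++ with glm; the matrix names and the [0,1]-depth / top-left-UV conventions are assumptions for illustration, not id's actual code):

```cpp
// Given an SSR hit found against the *current* frame's depth buffer, compute
// the UV at which the *previous* frame's color buffer should be sampled.
#include <glm/glm.hpp>

glm::vec2 ReprojectHitToPrevFrame(glm::vec2 hitUV,    // hit position in [0,1] screen space
                                  float     hitDepth, // depth buffer value at the hit [0,1]
                                  const glm::mat4& invViewProj,  // current frame, clip -> world
                                  const glm::mat4& prevViewProj) // previous frame, world -> clip
{
    // Rebuild the NDC position of the hit (assuming [0,1] depth, UV origin top-left).
    glm::vec4 clip(hitUV.x * 2.0f - 1.0f,
                   1.0f - hitUV.y * 2.0f, // flip Y: NDC Y points up here
                   hitDepth,
                   1.0f);

    // Unproject to world space.
    glm::vec4 world = invViewProj * clip;
    world /= world.w;

    // Reproject with last frame's matrices.
    glm::vec4 prevClip = prevViewProj * world;
    prevClip /= prevClip.w;

    // Back to UV space; this is where the previous color buffer is sampled.
    return glm::vec2(prevClip.x * 0.5f + 0.5f,
                     0.5f - prevClip.y * 0.5f);
}
```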

Many thanks for your great analysis of the rendering pipeline of Doom Eternal. Since it's such a recent title, we get to see id Software's newest ideas about rendering pipeline design.
I'm a little curious how you captured the rendering steps. I also tried RenderDoc, but when I captured Doom Eternal, RenderDoc reported errors and could not capture the data.
Do you also use RenderDoc, or some other tool like a modified version of ReShade?

Fantastic article, thank you for your insight. I am wondering how they compute screen space ambient occlusion without writing normals to a texture earlier in the frame. How can they construct samples around the hemisphere without normals? Are the normals inferred from the depth buffer?

Thanks.
The samples are indeed determined by reconstructing the surface normal from the depth buffer.
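
Roughly, that reconstruction (and how the hemisphere samples then follow from it) looks like this (a sketch in C++ with glm; loadViewPos is a hypothetical stand-in for unprojecting a depth sample to view space):

```cpp
#include <glm/glm.hpp>
#include <functional>

using LoadViewPos = std::function<glm::vec3(int x, int y)>;

glm::vec3 NormalFromDepth(int x, int y, const LoadViewPos& loadViewPos)
{
    // View-space positions of the pixel and its right/down neighbors,
    // each unprojected from the depth buffer.
    glm::vec3 p  = loadViewPos(x,     y);
    glm::vec3 px = loadViewPos(x + 1, y);
    glm::vec3 py = loadViewPos(x,     y + 1);

    // Two tangents along the surface; their cross product is the normal.
    // (Real implementations also guard against depth discontinuities at edges.)
    return glm::normalize(glm::cross(px - p, py - p));
}

// With the normal in hand, any sample direction on the sphere can be folded
// into the hemisphere above the surface:
glm::vec3 HemisphereSample(glm::vec3 sphereDir, glm::vec3 normal)
{
    return glm::dot(sphereDir, normal) < 0.0f ? -sphereDir : sphereDir;
}
```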

Great job! I've learned a lot from your article; thanks for sharing. May I ask how they denoise the screen-space reflection results, given that SSR is done during the forward shading phase and not in a post-process?

It's been a while since I've looked, but I don't remember any denoising happening specifically for SSR. When I looked (long before RT reflections were introduced), the SSR implementation ran directly in the forward shading pass and always traced a single perfect mirror ray, not following the BRDF of the surface. So most of the noise comes from the possibly low number of raymarching steps, and I didn't see any resolve for it except the global TAA resolve at the end of the frame.
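
For context, the kind of fixed-step, mirror-direction raymarch I mean looks roughly like this (a sketch in C++ with glm; SampleDepth, the step count, and the thickness are illustrative assumptions, not the game's actual values):

```cpp
#include <glm/glm.hpp>
#include <functional>
#include <optional>

using SampleDepth = std::function<float(glm::vec2 uv)>; // linear view-space depth at a UV

std::optional<glm::vec2> TraceSSR(glm::vec3 viewPos,     // view-space position of the shaded pixel
                                  glm::vec3 viewNormal,  // view-space surface normal
                                  const glm::mat4& proj, // projection matrix (view -> clip)
                                  const SampleDepth& sampleDepth)
{
    const int   kMaxSteps  = 32;   // low step counts are where the noise/banding comes from
    const float kStepSize  = 0.1f; // view-space units per step
    const float kThickness = 0.2f; // how far behind the depth buffer still counts as a hit

    // A single perfect mirror ray: reflect the view vector about the normal.
    glm::vec3 rayDir = glm::reflect(glm::normalize(viewPos), viewNormal);

    glm::vec3 rayPos = viewPos;
    for (int i = 0; i < kMaxSteps; ++i)
    {
        rayPos += rayDir * kStepSize;

        // Project the marched point to screen space.
        glm::vec4 clip = proj * glm::vec4(rayPos, 1.0f);
        glm::vec2 uv(clip.x / clip.w * 0.5f + 0.5f,
                     0.5f - clip.y / clip.w * 0.5f);
        if (uv.x < 0.0f || uv.x > 1.0f || uv.y < 0.0f || uv.y > 1.0f)
            break; // ray left the screen, no hit

        // Hit if the ray went behind the depth buffer (within a thickness band).
        float sceneDepth = sampleDepth(uv);
        float rayDepth   = -rayPos.z; // right-handed view space looks down -Z
        if (rayDepth > sceneDepth && rayDepth < sceneDepth + kThickness)
            return uv; // the reprojected previous-frame color buffer is sampled here
    }
    return std::nullopt; // miss: fall back to e.g. a cubemap or probe
}
```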

Thanks for your reply! After a lot of thinking, I'm trying to add a temporal filter for the SSR pixels in my forward shading pass, so they can follow the BRDF of the surface, too. It seems to work as long as the camera is not moving. All I need to do now is calculate a UV offset from the SSR hit depth and the previous view-projection matrix when sampling the previous frame's image.
Thanks again!
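
For anyone trying the same thing, the filter I mean is roughly an exponential blend with a neighborhood clamp, sketched here in C++ with glm (the blend weight and the clamp are common choices, not taken from the game):

```cpp
#include <glm/glm.hpp>

glm::vec3 TemporalResolveSSR(glm::vec3 current,     // this frame's (noisy) SSR color
                             glm::vec3 history,     // previous result, sampled at the reprojected UV
                             glm::vec3 neighborMin, // min of current's 3x3 neighborhood
                             glm::vec3 neighborMax) // max of current's 3x3 neighborhood
{
    // Neighborhood clamp rejects stale history (disocclusion, moving objects).
    glm::vec3 clampedHistory = glm::clamp(history, neighborMin, neighborMax);

    // Exponential moving average; ~90% history keeps the result stable while
    // the camera is still, at the cost of some ghosting in motion.
    const float kHistoryWeight = 0.9f;
    return glm::mix(current, clampedHistory, kHistoryWeight);
}
```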

omd24 commented

Thanks for the great article. One general question: you mention that "With Id Tech 7, the engine has moved away from OpenGL and is entirely built with a Vulkan backend". What about DirectX and the PlayStation API? For example, doesn't Xbox only support DirectX, or can Vulkan be used there as well?

By that I mean the Windows platform. I can't speak for the console platforms.