erichlof/THREE.js-PathTracing-Renderer

Denoising

sweco-sekrsv opened this issue · 17 comments

Your project is fantastic, quite an achievement! I saw another project that is similar to yours; they talk about denoising in this issue:

hoverinc/ray-tracing-renderer#2

I thought you might have experience with denoising techniques and want to add to the discussion. It would be cool to see denoising end up in your project as well.

@kristiansvenssonwsp
Hello, thank you!
That's a cool project - I was unaware of it, thanks for the link. Truth be told, denoising is the area where I have the least experience and knowledge. Over the last 4 years I have tried to educate myself about basic to advanced general ray tracing - which, as you know, covers a lot of fields: light transport, material BRDFs, path tracing, Monte Carlo integration (and probability/statistics), ray intersection, geometry, BVH acceleration, GPU parallelism, etc. - the list goes on. One of the subjects I haven't gotten around to yet is the art of denoising rendered images in real time. I'm not even sure how it's done, to be honest, but if I had to take an educated guess, I would say that before you write a final color to each pixel in the fragment shader, you take a peek at the pixel's neighbors above, below, left, and right, then somehow average the results, kind of like a bilinear filter. But that is probably a gross over-simplification. I wonder how fine edges are handled. And how are materials handled that are purely specular and don't require any denoising, vs. diffuse surfaces that do?
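Just to pin my guess down, here's the kind of neighbor averaging I'm imagining, with an edge-stopping weight that might answer my own question about fine edges. This is only a sketch with made-up names (tColor, tNormalDepth, and so on), not code from any actual denoiser:

```glsl
// Toy edge-aware 3x3 blur: average the neighbors, but down-weight any
// neighbor whose normal or depth differs from the center pixel's, so the
// blur stops at geometric edges. All names here are hypothetical.
uniform sampler2D tColor;       // noisy path-traced color
uniform sampler2D tNormalDepth; // .xyz = surface normal, .w = depth
uniform vec2 uInvResolution;    // 1.0 / screen resolution

vec3 edgeAwareBlur(vec2 uv) {
    vec4 centerND = texture(tNormalDepth, uv);
    vec3 sum = texture(tColor, uv).rgb;
    float weightSum = 1.0;
    for (int x = -1; x <= 1; x++)
    for (int y = -1; y <= 1; y++) {
        if (x == 0 && y == 0) continue;
        vec2 nUv = uv + vec2(float(x), float(y)) * uInvResolution;
        vec4 nND = texture(tNormalDepth, nUv);
        // Similar normal and depth -> weight near 1; across an edge -> near 0
        float w = max(dot(nND.xyz, centerND.xyz), 0.0)
                * exp(-abs(nND.w - centerND.w) * 10.0);
        sum += texture(tColor, nUv).rgb * w;
        weightSum += w;
    }
    return sum / weightSum;
}
```

A purely specular pixel could simply skip this filter, which might be one answer to the materials question.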

I noticed that the discussion link you provided includes some simple Shadertoy denoising examples. Thanks for those - they will help me get started on understanding what goes into a denoiser. Hopefully I can learn from the actual shader source code and try some of it out on my own project. I'll definitely let you know if I get something working.

At NVIDIA, they are not only using denoisers but also, somehow, using machine learning (AI) to assist with cleaning up the diffuse-surface noise in real time. There's been a lot of research in image-recognition AI, and it seems to get better every month. Exciting times!

dafhi commented

"AI" sure is interesting. Thanks for those links. Since many path tracers are progressive just for the fun of watching, I was thinking why not just denoise every n frames, progressively increasing the denoising frame gap. ZUH!

I wrote a basic PT (in BASIC, no less)... only spheres. I have a very simplistic approach to problem solving in many of my ideas. I can't remember how I handled fireflies, but my Russian roulette technique was unique.

Of course this project's caustics rendering and "firefly handling," if that's what one might call it, is on a completely different level. My favorite scenes are the geometry showcase and difficult lighting.

I have one thing to contribute, and maybe more as time goes on. In the programming forum I visit, there has been discussion of PRNGs. I invented a couple; my new one seems to favor Monte Carlo path tracing:

a *= a      ' square the state
a xor= w    ' mix in the counter
w += 1      ' advance the counter each call
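A hypothetical GLSL port of those three lines, assuming 32-bit unsigned state that wraps on overflow (which GLSL uints guarantee); the naming, seed value, and [0, 1) scaling are my own additions:

```glsl
// Per-invocation PRNG state; the seed value here is arbitrary
uint a = 0x9E3779B9u;
uint w = 0u;

float nextRandom() {
    a *= a;   // square the state (wraps mod 2^32)
    a ^= w;   // mix in the counter
    w += 1u;  // advance the counter each call
    return float(a) / 4294967296.0; // scale to [0, 1)
}
```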

@dafhi
Hi David, yes, I think the state-of-the-art denoisers actually do their work over multiple frames, like you envisioned with the progressive rendering. The big difference between what they do and simple progressive rendering like mine is this: I just present the 1 sample per pixel to the screen every frame from startup - that's why you see the initial noise when you start moving the camera. When the camera is still, the 1 sample per pixel is kept and then averaged with the new 1 sample per pixel, which becomes essentially '2 samples' per pixel, and so on and so on...

I think the AI denoisers and other sophisticated temporal denoisers also take the first 1-sample-per-pixel image as input, but it's not presented to the screen right away. Some magic happens under the hood with the following 1-sample-per-pixel image, and when it's done, the new combined, denoised image is what actually gets presented to the screen. Of course, all of this happens at a blazing 60 times a second. Not sure yet how it all works - I'm still looking into those links that Kristian provided above.
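In shader terms, my simple progressive averaging boils down to roughly this (a sketch with hypothetical names, not my exact code):

```glsl
uniform sampler2D tPreviousAverage; // running average from prior still-camera frames
uniform float uSampleCount;         // frames accumulated since the camera last moved
in vec2 vUv;
out vec4 fragColor;

// Hypothetical: this frame's 1-sample path-traced estimate, defined elsewhere
vec3 pathTraceCurrentPixel();

void main() {
    vec3 previousAverage = texture(tPreviousAverage, vUv).rgb;
    vec3 newSample = pathTraceCurrentPixel();
    // Blending with weight 1/N keeps a true running mean of all N samples
    fragColor = vec4(mix(previousAverage, newSample, 1.0 / uSampleCount), 1.0);
}
```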

That's awesome that you wrote a path tracer in BASIC! I didn't think something like that was even possible! Takes me back to my first programming experience, programming in BASIC on my beloved Commodore 64 back in 1983!

Yes, the caustics and fireflies have been a 'thorn in my side' ever since this project started. Fireflies are a little more manageable than caustics though; you just have to have some tricks up your sleeve. The reason they happen in a pure, basic path tracer without direct light sampling is that a secondary GI diffuse surface has found a bright light by chance. Say your camera ray hits a diffuse surface, then bounces from that to another diffuse surface, then to another, and from there it accidentally hits a light source. If you work your way back, that contributes a really bright pixel for that little point on the initial diffuse surface the camera ray hit. Its neighbors are not so lucky: maybe they hit another secondary diffuse surface as well, but odds are the ray won't find the light source after all that random diffuse bouncing, so it just keeps bouncing and attenuating the ray's contribution until it either dies out or escapes the scene, resulting in darker pixels right around the super-bright one that was 'lucky' (unlucky for us! ha).

The only way to get around this is to implement direct light sampling, and then handle fireflies by saying: since I've already manually added the light contribution on every diffuse bounce, if a diffuse surface accidentally hits a light source, the ray is terminated and the contribution is 0. This keeps everything much more uniform, lets you control the amount of brightness and shadow in the scene much more easily, and gives faster convergence too! The good news is that it has been mathematically proven that even if you 'cheat' and add direct light sampling, your Monte Carlo path tracer is still consistent and totally unbiased. It's one of those amazing mathematical quirks of statistics and probability theory.
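That termination rule, sketched out (hypothetical names, not my actual shader code):

```glsl
// With direct light sampling performed at every diffuse bounce, a path that
// reaches the light *after* a diffuse bounce must add nothing, because that
// light was already counted; only mirror-like chains (and direct camera
// hits) still add the light's emission.
vec3 lightHitContribution(vec3 throughput, vec3 emission, bool cameOffDiffuse) {
    if (cameOffDiffuse)
        return vec3(0.0);         // already handled by direct light sampling
    return throughput * emission; // specular path: count the emission
}
```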

Thanks for the PRNG tidbits. I must admit that the subject of generating random numbers is way above my head; I have never really looked deeply into how it all works. I credited iq on ShaderToy for his amazing bit-shifting generator that runs really fast inside a shader on the GPU. I owe a lot to those 4 little magic lines of code (in terms of smooth randomness for the Monte Carlo bits of my project, which is so important), but I have no idea how they work! ;-D
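For the curious, a well-known generator in the same bit-mixing spirit looks like the following. To be clear, this is the PCG hash from Jarzynski and Olano's paper 'Hash Functions for GPU Rendering', not the exact four lines I was referring to:

```glsl
// PCG-style integer hash: a multiply-add, a variable xorshift, a multiply,
// and a final xorshift. Fast and high quality for shader Monte Carlo work.
uint pcgHash(uint v) {
    uint state = v * 747796405u + 2891336453u;
    uint word = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;
    return (word >> 22u) ^ word;
}

// Map the hashed bits to a float in [0, 1)
float randFrom(uint seed) {
    return float(pcgHash(seed)) / 4294967296.0;
}
```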

@erichlof perhaps this blog can shed some more light on the subject. See, most denoising algorithms are actually open-sourced; for instance, NVIDIA's SVGF implementation is open-sourced in the Quake II RTX repo.

Also, this video was mighty interesting to watch.

@MaartenBreeedveld
Thank you for the links! I'm going to read the blog post right now. Also I have seen that video briefly before (I'm always looking for newly uploaded PathTracing videos on YouTube, ha ha). But I will take a closer look at it with the blog post fresh in my memory ;-)

Thanks again!
-Erich

@erichlof
I stumbled on an A-SVGF example on ShaderToy. I can't say it's fully working, but it's looking promising!

I'm wondering how the 'Channels' from Shadertoy would be implemented in THREE.js.

@MaartenBreeedveld

Ah yes I had seen this one before and had actually bookmarked it. Although it is impressive in quality, it is equally impressive in how dense the code is. His coding style, cryptic variable naming, and lack of white space is a huge barrier for me understanding what is going on. Obviously he is very knowledgeable (you could even say brilliant), but without meaningful code comments or more thoughtful structure, I would be at a loss if I just started poking around at variables and magic numbers. I wouldn't know if anything I was doing was changing or breaking anything.

That being said, I am going to go back and take a closer look at the original implementation - the one used for Quake II RTX. Here's an NVIDIA link. Also, here's an interesting link that uses much simpler scenes and geometry: a CUDA GitHub project where I might be able to better separate the denoising parts from the usual rendering parts.

About Shadertoy 'channels' and Three.js: the channels are like 'render targets' in Three.js that each have their own Three.js scene. All of them are ShaderMaterials in Three.js that get stretched across a screen-size quad (2 huge triangles). If you were somehow able to port my entire codebase to Shadertoy, there would be 2 'channel' tabs and the required 'Image' tab.

The first channel would be the traditional main path tracing shader; the second channel would be a simple copy of the first channel's entire output to a large quad texture. The first channel would have a link to the second channel as input to blend with (blended image = second channel * 0.5 + first channel * 0.5). This creates what is sometimes referred to as a ping-pong buffer.

The final 'Image' tab would have a link to the first channel (which, recall, is a blend of the first channel and the second channel). The 'Image' tab divides the input blended image by the number of sample frames taken so far (final_Intensity = inputImage / sampleCount), applies tone mapping and gamma correction, and then renders to its own full-screen quad just like the channels did. The user only sees the 'Image' tab, never the channels - in my case the user sees the 'screenOutput scene' and never the 'pathTracing scene' or the 'screenCopy scene', as those ping-pong full-screen texture render targets contain huge, unbounded floating-point linear color values that would oversaturate the monitor without the necessary tone mapping and gamma correction applied in the 'screenOutput' shader. Hope that clarifies things!
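That final 'screenOutput' stage, sketched as a fragment shader (hypothetical names, with a simple Reinhard curve standing in for whatever tone mapping one prefers):

```glsl
uniform sampler2D tBlendedImage; // ping-pong blend of the two channels above
uniform float uSampleCount;      // number of sample frames taken so far
in vec2 vUv;
out vec4 fragColor;

void main() {
    // Average the accumulated linear color, then bound it for the monitor
    vec3 color = texture(tBlendedImage, vUv).rgb / uSampleCount;
    color = color / (color + vec3(1.0)); // simple Reinhard tone mapping
    color = pow(color, vec3(1.0 / 2.2)); // gamma correction
    fragColor = vec4(color, 1.0);
}
```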

As always, thank you for the links and heads-up about these resources!

@erichlof
I must agree with you there. I'm pretty much a noob when it comes to shaders, but I've earned some stripes in programming.
The math in this shader looks very dense indeed. I was somewhat hoping it would be less of an issue for you, considering your knowledge of shaders.

The NVIDIA link looks good indeed.

Thank you very much for the explanation of the channels, super interesting!

@MaartenBreeedveld
No problem! I'm glad to try and explain - it forces me to make sure I understand it myself first! LoL

Yes, it is unfortunate that I can't wade through his Shadertoy code, as it is probably exactly what my renderer needs right now. I'm the type of person who goes back and reads introductory chapters about vector operations (like dot and cross products) because I want to make sure I understand why something is done a certain way, and oftentimes having it explained by 2 or 3 different authors with different approaches/diagrams helps me view it from a new perspective and hopefully solidifies my understanding of the topic further.

I'll openly admit to copying and pasting from StackOverflow and Shadertoy at times. But I never just drop it in and leave it - I play around with it, poking and prodding it (like a kid engineer with a toy car), most often breaking it, then trying to put it back together in a new way. Then I can confidently make it a part of my codebase, because I know that if something goes wrong with it, I'll at least have some understanding of how and why it works. For instance, it took me months to understand what a BVH is, how to create one, and then how to traverse one in a shader. I started by copying a couple of C++ BVH builders out in the wild, then going line by line, breaking things and poking around until I could build my own. It will most likely be the same for a denoiser. At this point I'm just trying to find a simpler one that I can play around with - maybe not even a full denoiser, just a basic spatial blur filter that I can drop into a shader and then improve upon, bit by bit.

You mentioned that you are newer to shaders. I think you'll like this YouTube channel. If you check out the older videos, he explains in a very clear way how shaders work and how to do simple effects and basic shapes with ray marching. His videos progress all the way to some pretty advanced concepts, but overall they are understandable and really well done.

Also, on the more advanced side of shaders, P_Malin of Shadertoy and glslSandbox has really amazing shaders, but even more inspiring for me personally are his clear code style, carefully thought-out variable names, and helpful comments here and there. Here is one of his more popular examples: link. You can really tell what's going on and what each function's job is.

Enjoy!
-Erich

@erichlof
Awesome! I've stumbled onto that YouTube channel in the past; it's a great resource indeed!
I'm going to have to free up some time to properly learn shaders I guess!

Anyway, I will be following your progress closely 👍!

Hi Erich,
I popped by to see how your project is going and I must say I'm super impressed with the denoiser!
Also, those NVIDIA guys didn't do it in one day. They were researching the matter for years as well :).

Awesome!

This YouTube video shows a comparison of the results of several denoising methods (SVGF, NFOR, ONND, MR-KP, BMFR, NBG):

https://youtu.be/9PVR1-GTt6g?si=crasv6MpmPudwiJV


An interactive live results viewer to compare those methods is also available via the following page (the viewer itself is served over HTTP instead of HTTPS, but it should be fine):
https://github.com/xmeng525/RealTimeDenoisingNeuralBilateralGrid?tab=readme-ov-file

@giovanni-a Thank you for the links and video! That online viewer is a great tool - haven't seen many research papers use such an interactive tool for comparisons. It really helps to show the differences. Thanks again!

Thank you, @erichlof! Your work is fantastic, and I cannot wait to see what you will achieve next. I feel that the day when we'll have a path tracer that works in real time, with minimal noise, and even on mobile, is getting closer. Keep up the great work!

I've been following your project on and off for a few years now, and I came here to open an issue about denoising myself and get a conversation going. So glad it's here! :D I'm sure you're very busy and it can be quite an undertaking, but I think implementing some sort of TAA reprojection and denoiser would bring this project to another, another level. With such good frame rates even on mobile, I'd love to try to use your renderer in the wild on something, but (almost) everything has pesky motion!

I might take a stab at it myself if I get started on something. This looks fairly *easy* to implement (maybe).
https://www.shadertoy.com/view/WdjcDd

Thanks for your awesome work @erichlof !

@jerzakm Thank you very much for the kind words and suggestion! And thank you for the great example - that will be very helpful!

Sorry for the late replies - for the last couple of months, I have been going down the rabbit hole of trying to efficiently raycast a torus (which is traditionally very difficult and expensive because it is a surface of degree 4, a quartic). Sometimes I feel like the comical mad scientist who creates a solution, throws it out in disgust, starts over, tries something else, throws that code out too, and finally... I have found a solution! I will be releasing a demo soon that ray traces and renders a very close torus approximation for the cost of ray tracing a sphere! Therefore, we can have as many torus shapes as we want in a scene, and it'll run smoothly on every device - even on mobile!

About the TAA denoising: I'm so glad you entered the discussion, @jerzakm, because I've tried looking at the freely available code for the complete A-SVGF denoising solution a couple of times in the past, but I just couldn't wrap my head around it - even though there were a couple of shadertoys that demonstrated the method. And as I previously mentioned in this thread, before I just drop someone else's solution into my project, I feel the need to really understand it, line by line. That way, if I add something else in the future that conflicts/crashes with the denoiser or causes artifacts, I can debug it and at least know where to start in order to find a solution. And since I couldn't wrap my head around how A-SVGF does its magic, I couldn't bring myself to just dump it in my codebase and start trying to hook it up to everything else.

However, the helpful shadertoy example you linked to is very tight and focused, so the amount of shader code is much more manageable for me to pore over. I'm glad you mentioned the exact technique by name, TAA, because that is what the example focuses on for the denoising. If I'm not mistaken, A-SVGF uses a very similar component - TAA - but there it is one of several components (alongside the edge-detection component, the screen-space filtering component, etc.) and is wrapped up and entangled in the A-SVGF source code, making it more difficult for me to tell what is what.

But now I feel I can study this small component (TAA with re-projection) of the overall larger denoising solution in order to understand it more fully. And hopefully soon I can start experimenting (back to my 'laboratory' 😆) with hooking TAA up to my own renderer and customizing it.
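If I understand it correctly, the core re-projection step boils down to something like this (a minimal sketch with hypothetical names, not the shadertoy's actual code):

```glsl
uniform sampler2D tCurrent;       // this frame's noisy 1-sample image
uniform sampler2D tHistory;       // the accumulated image from previous frames
uniform mat4 uPrevViewProjMatrix; // last frame's camera view-projection matrix

// worldPos: this pixel's reconstructed world-space hit point
vec3 taaResolve(vec3 worldPos, vec2 fragUV) {
    vec3 current = texture(tCurrent, fragUV).rgb;

    // Re-project this pixel's surface point into the previous frame
    vec4 prevClip = uPrevViewProjMatrix * vec4(worldPos, 1.0);
    vec2 prevUV = (prevClip.xy / prevClip.w) * 0.5 + 0.5;

    // If the point was off-screen last frame, the history is invalid
    if (any(lessThan(prevUV, vec2(0.0))) || any(greaterThan(prevUV, vec2(1.0))))
        return current;

    // Exponential blend: keep most of the history, fold in a little of the new sample
    vec3 history = texture(tHistory, prevUV).rgb;
    return mix(history, current, 0.1);
}
```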

But don't let this stop you from trying it on your own! If you do, and you are able to get it working, please share your findings in this thread, and maybe even share a GitHub Gist so we can see how you hooked it up. Like I mentioned, I'm working on the torus raycasting at the moment, but soon, when that's finished (or good enough to be satisfied with, ha), I will go down the TAA rabbit hole! Several people have asked for a more sophisticated denoising scheme, and I really appreciate the look of the A-SVGF approach - it seems to be the standard that most ray tracers turn to (if one is not using neural networks and AI to denoise the final image). So I think that if I could get a small but essential component of that solution working, it would benefit our renderer a lot! 😊

-Erich