erichlof/THREE.js-PathTracing-Renderer

Support for different refractive index, metalness and roughness in Specular property

nayan-dhabarde opened this issue · 12 comments

First of all, great work! I am trying to modify the glsl file to add support for metalness, roughness, and refractive index. I was able to add roughness to the specular property by using the mix function. However, I don't have any clue on how to change the refractive index and metalness properties for a specific material. It would be great if you could add those.

Hi @ndhabrde11
Thanks! Yes, being able to change those properties in a clear, API-like manner has been on my never-ending TODO list (ha). You were correct to use the mix function to linearly interpolate between the perfect reflection vector (like a smooth mirror) and a rough specular surface (like scratched or damaged metal).

However, just one suggestion (and you may have already done this): instead of using the rand(seed) function to randomly offset the reflection vector based on roughness, it is better to use the randomDirectionInHemisphere(nl, seed) function to pre-calculate the rough randomized reflection vector first, and then just 'mix' between the perfect reflection vector and the rough one. The reason for this distinction is that randomDirectionInHemisphere(nl, seed) produces a more consistent, predictable pseudo-random vector that stays in line with the surface normal (nl), compared to just randomly offsetting the original vector as I have done in some of my demos in the past. The visual result should be less noise on the rough surfaces. On some of the demos, like the Bi-Directional Difficult Lighting demo with the 3 teapots, I used this better technique in the SPEC surface handling, so you can use that as an example if you need to.
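
For illustration, a minimal sketch of that approach inside the SPEC handler (assuming an 'intersec.roughness' field in the 0.0-1.0 range, which is not in the stock shaders; the other names follow the shaders' conventions):

if (intersec.type == SPEC) // rough metal, a sketch only
{
	mask *= intersec.color; // metals tint the reflection
	// pre-calculate the fully rough bounce direction, oriented around the surface normal
	vec3 roughDir = randomDirectionInHemisphere(nl, seed);
	// blend between the perfect mirror reflection and the rough direction by roughness amount
	r = Ray( x, normalize(mix(reflect(r.direction, nl), roughDir, intersec.roughness)) );
	r.origin += nl * uEPS_intersect; // nudge off the surface to avoid self-intersection
	continue;
}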

As for specifying the index of refraction (IoR) for refracting surfaces like glass, water, gems, etc., it should be a simple matter of setting the 'nt' value inside the shader's handling of REFR surfaces. Here's how it currently is:
if (intersec.type == REFR) // Ideal dielectric REFRACTION
{
	nc = 1.0; // IOR of Air
	nt = 1.5; // IOR of common Glass
	Re = calcFresnelReflectance(n, nl, r.direction, nc, nt, tdir);
	Tr = 1.0 - Re;
	...
}

Here 'nt' is passed into the calcFresnelReflectance() function, which determines how much the ray will transmit through the surface vs. how much it will reflect off of the surface. Re is how much it will reflect, and then I simply do 1.0 - Re to find how much it will transmit. So instead of hard-coding 1.5 for the value of nt for glass as I have done, just set it to the desired IoR of an arbitrary surface, like:
nt = intersec.IoR;
You can just add an 'IoR' field (of type float) to the 'intersec' structure declared at the top of all my shaders. Also add it to all the shape types that you will need, like sphere, quad, box, etc. Then in the void SetupScene(void) function at the bottom of most shaders, add the IoR (a float value) to the scene object definitions, like:
spheres[3] = Sphere( 33.0, vec3(290.0, 189.0, -435.0), z, vec3(1,1,1), 1.5, REFR); // Glass Sphere
or:
spheres[3] = Sphere( 33.0, vec3(290.0, 189.0, -435.0), z, vec3(1,1,1), 1.33, REFR); // 'Watery' Sphere
or:
spheres[3] = Sphere( 33.0, vec3(290.0, 189.0, -435.0), z, vec3(1,1,1), 2.4, REFR); // Diamond-like Sphere
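
For reference, after adding the field, the struct declarations would look something like this sketch (the field order here just mirrors the Sphere constructor calls above; match it to your shader's actual declarations):

struct Sphere { float radius; vec3 position; vec3 emission; vec3 color; float IoR; int type; };
struct Intersection { vec3 normal; vec3 emission; vec3 color; float IoR; int type; };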

In the SceneIntersect function, the closest ray intersection always wins and overwrites the previous closest intersection, so you just set the field if it happens to be the closest:
if (d < t)
{
	t = d;
	intersec.normal = normalize((r.origin + r.direction * t) - spheres[3].position);
	intersec.emission = spheres[3].emission;
	intersec.color = spheres[3].color;
	intersec.IoR = spheres[3].IoR;
	intersec.type = spheres[3].type;
}

You can find plenty of tables of various IoRs for different surfaces on the internet. The most common are 1.5 for glass and 1.33 for water, which is why I haven't gotten around to providing an API-like manner for specifying these; I just hard-coded them into the demos.

One more thing: this assumes interactions between refractive surfaces and Air, like a pool of water with Air on top, or a glass sphere filled with Air, sitting in a room also filled with Air. Notice that the 1.0 nc value for Air in the above function did not change. Now if you have a glass filled with water, sitting in a room filled with Air, you need to be more thoughtful about how you handle the possible interactions between glass, liquid, and air, and more bounces will be needed in the bounces loop. The 'nc' value will also have to change depending on where the ray currently is, which complicates matters. I was just thinking the other day that I will do a demo of a glass of iced tea sitting on a table with a straw in it, to show the refraction bending effect. This will require that I provide an API-like interface for specifying IoR. But if you don't want to wait around until I get that demo working, you hopefully have the necessary ingredients above to get you started.
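
As a starting point, here is a hedged sketch of the simplest version of that bookkeeping, which only distinguishes entering vs. exiting a single refractive object surrounded by Air (a true glass-of-water scene would need to track which medium the ray is currently inside, and calcFresnelReflectance may already handle part of this orientation internally):

// a sketch only: detect whether the ray is entering or exiting the object
bool rayIsExiting = dot(r.direction, n) > 0.0; // ray agrees with the outward geometric normal n
nc = rayIsExiting ? intersec.IoR : 1.0; // IoR of the medium the ray is currently traveling in
nt = rayIsExiting ? 1.0 : intersec.IoR; // IoR of the medium the ray is entering
Re = calcFresnelReflectance(n, nl, r.direction, nc, nt, tdir);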

Metalness is a different problem. It mainly boils down to specifying the cutoff between reflective (metal) and diffuse (non-metal), and then handling each case as normal inside the bounces loop:
if (intersec.type == SPEC) // Ideal SPECULAR reflection
{
	... reflect
}
if (intersec.type == DIFF) // Ideal DIFFUSE reflection
{
	... choose random cosWeighted direction vector
}

Now in nature there is no such thing as partly metal: either it is metal, or it isn't. So again, if you just add a metalness property (of type float) to the intersec structure, then you can easily specify the metalness inside the SceneIntersect function, where the closest surface wins: whatever the metalness texture value is (if you're using PBR textures), or whatever surface property you hard-code into that shape, will be the value for that particular ray intersection.
Then you fill out the field inside the SceneIntersect function each time a new closest ray intersection distance is found and overwrites the previous closest distance:
if (d < t)
{
	t = d;
	intersec.normal = normalize((r.origin + r.direction * t) - spheres[3].position);
	intersec.emission = spheres[3].emission;
	intersec.color = spheres[3].color;
	intersec.roughness = spheres[3].roughness;
	intersec.type = spheres[3].metalness > 0.0 ? SPEC : DIFF;
}

You can see some of this in action in my Animated BVH Model demo, which loads the damaged helmet model. This model goes back and forth between SPEC (metal) and COAT (a non-metal, diffuse-like surface with a clearCoat on top). I just wanted to get the PBR materials loading correctly and wasn't thinking about a clean interface, so the bounces loop is a little messy, but it does what you are wanting, so you can see how I initially did it.

intersec.type = COAT; // initially set everything to non-metal
metallicRoughness = texture(tMetallicRoughnessMap, intersec.uv).rgb; // PBR texture read
if (metallicRoughness.b > 0.0) // '.b' is the metalness component of the texture
	intersec.type = SPEC; // overwrite and set to metal

Thanks for the suggestion - it has given me new determination to create a clean interface for future applications. If you have any other questions, feel free to ask! :)
-Erich

Thank you for such a detailed description and solution. Below are my comments:

  1. About the rand(seed): yes, I did notice the noise. I will try the other one.
  2. About the refractive index: I already knew that it is there in the REFR type. What I was wondering is, what would it take to add it to the specular property? Would I need something in the else condition:
    if (rand(seed) < Re)
    {
    }
    else
    {
    here?
    }
    The reason I asked this is because I went to this link:
    https://www.chaosgroup.com/blog/understanding-metalness

You can see that different types of metals have different refractive indices (which you may already know).

Also, I was wondering: can I make it appear as metal without a metallicRoughness map, or is it already doing that with the Specular property? It looks like a metal to me when I increase the roughness. If it is already doing that, I would like to change its refractive index, calculate the Fresnel reflectance, and apply it, in order to achieve different metals like aluminium, gold, silver.

If you check out the three.js PBR material (https://threejs.org/docs/#api/en/materials/MeshPhysicalMaterial), it does not need a map. Do you think this is possible?

  3. Is it a lot of work to add support for bump textures?

BTW, thanks to the GLTF sample you had, I was able to load a three.js JSON model in it.
I apologize if some of the above does not make sense. Shaders are new to me; my experience is in using three.js.

@ndhabrde11
Ah, I understand a little better what you were wanting now. Yes, you can just add the
Re = calcFresnelReflectance(n, nl, r.direction, nc, nt, tdir);
to the SPEC (which is basically all metals) property handling inside the bounces loop.

However, as you'll see below, it doesn't make much sense when the reflectance function says "transmit the ray into the metal".

All of my current demos just handle SPEC with a simple mirror reflection of the ray, which may or may not be randomized due to roughness, if present. I don't plan on adding a refractive property to my metals, but basically three things would need to happen if you wanted to add a refractive check to the metal surface handling:
if (intersec.type == SPEC) // metals
{

  1. Multiply the current ray's 'mask' (color) by the color of the metal, because by their very nature, light rays pick up the color of metals, whatever color that might be. For a purely reflecting metal mirror, you would put:
    mask *= intersec.color; // intersec.color is white vec3(1, 1, 1) in this case, which reflects perfectly
    A list of different metal colors such as gold, silver, copper, aluminum, etc. is available on the internet. Multiplying by something other than white (1,1,1) will color-tint the reflection, which is what we need.

  2. Call the Fresnel reflectance calculator function:
    nc = 1.0; // IoR of Air
    nt = ?; // whatever your metal's IoR is
    Re = calcFresnelReflectance(n, nl, r.direction, nc, nt, tdir);

  3. Get the reflection vector and send the new reflected ray on its way to continue tracing through the scene:
    if (rand(seed) < Re)
    {
        r = Ray( x, reflect(r.direction, nl) );
        // must nudge the ray up a little along the surface normal to prevent repeatedly intersecting the same surface over and over again
        r.origin += nl * 0.1; // 0.1 is arbitrary precision; can be 0.01, or 1.0, etc., depending on platform
        bounceIsSpecular = true; // turn on mirror caustics if you want
        continue; // continue with next bounce loop iteration
    }
    else // transmit into the metal? not sure about how to handle this case
    {
        r = Ray(x, tdir); // tdir is the refracted transmission direction, calculated by calcFresnelReflectance()
        r.origin -= nl * 0.1; // this subtraction works for glass and water because we are going underneath the surface, but not sure about metal
        bounceIsSpecular = true;
        continue;
    }

As you can see, it doesn't make much sense sending the transmitted ray into the metal; it will never escape and will just keep incorrectly self-intersecting inside the surface. It works for glass and water because after the ray pops through, it continues tracing, but light rays in nature don't go through slabs of metal, not that I'm aware of anyway. So it might be a moot point to send the ray on this transmitted path.

I will read the links you provided and will return soon with hopefully more info. Thank you for the links!

Hi again. I may get into trouble for saying this, but I do not agree with the handling of metal in the link you provided, nor the resulting images (although they are pretty to look at). What they have done there is basically provide a clear coat on top of metal. This is what cars do. Next time you are driving, take a close look at cars' reflections: are they pure white reflections, or color-tinted based on the car color? The answer depends on how old the car is and when it was last washed/polished. A new, clean, polished car will have a white reflection, like water at a grazing angle, no matter if the car itself is painted black or red or dark blue. This is due to the clear coat on top, which is non-metal.

Non-metals, like plastic, glass, water, etc., reflect a perfect white reflection; they do not color the reflection. They can't, due to the laws of optics. You can use a still lake's reflection at a grazing angle to comb your hair and brush your teeth in the morning, even if the lake is filled with muck. If you try this with a used copper coin, you will have a hard time telling if your teeth are white enough - they never will be! This is because all metals tint the reflection; again, it can't be any other way due to the laws of optics.

Back to the car example: if you remove the clear coat polish and let the car age sufficiently, it will go back to color-tinting the reflections (assuming it is made out of metal, and not a fiberglass/carbonite space-age material race car or something).

Here is a nice example: a photo of 2 real metals, copper and aluminum, side by side making a nice snare drum. Notice that no matter what the angle or brightness of the lights, the copper never achieves white reflections. The aluminum hoops and hardware, on the other hand, easily achieve almost pure white reflections, like mirrors do.

Polished Copper and Aluminum snare drum

Sorry for seemingly ranting, but I think the PBR metalness link is slightly misleading - it should read 'metal with clear coat polish', or something more physically accurate. If you still want this effect, then the closest thing I came up with is my CARCOAT material, featured in the Switching Materials demo. There you can adjust the IoR based on how much polish you want (a higher IoR gives more of a white, plastic-looking reflection), versus the metal underneath, which tints the reflections like in nature, versus the amount of more modern diffuse fiberglass (or some space-age material substance) that provides color bleeding (maybe for a modern race car or something). That should give you an idea of how to handle an IoR clear coat on top of a metallic object, while still using all the reflected rays properly (no transmitted rays are necessary).

Hope this helps, I will be back later about the metallicRoughness map issue. Thanks!

Thank you once again.

At first, I was not able to understand how to achieve different materials. Everything makes much more sense now. I think I can try experimenting with them and see what I can get out of it.

I completely understood what the car coat is and how changing the refractive index changes the reflection tint.

About the second thing, the metallicRoughness map: maybe the way three.js is using it is just a shader trick that gives a better look than its previous material implementations, without needing actual physical concepts.

So I don't know if it would be worth looking into that for you. One last question: how difficult do you think it would be to implement a bump/normal map and an image radiance map?
Image radiance map: https://www.youtube.com/watch?v=WNQk4UM-L-w

Also, I was not able to understand these two lines:
weight = sampleQuadLight(x, nl, dirToLight, quads[5], seed);
mask *= clamp(weight, 0.0, 1.0);

About setting the color: when I do mask *= vec3(1.0, 1.0, 1.0), it colors the whole object instead of just changing the color of the reflection. I was wondering if I am missing something. I am using the CARCOAT property.
Setting mask *= vec3(1.0,1.0,1.0)
image

I implemented the car coat, but what I saw was that it looks more plastic than metal:
CAR COAT:
image

SPEC:
image

Hi @ndhabrde11, can you try the following modified CARCOAT surface handler in your shader? I think this is what you are looking for. You can dial up the amount of white (1,1,1) clearCoat reflection by increasing the 'nt' amount. If this value is high enough, the rays will bounce right off without color-tinting the reflection, like in the metalness link you were looking at, for a polished-car or plated-brass look. If the nt value is low enough, on the other hand, it will let the rays through to interact with the metal underneath, which will color-tint the reflection, as all metals do in the real world.
Btw, no need to manually set the mask. It starts the bounces loop out as white (1,1,1); just let it do its thing. If the ray bounces off of clearCoat, or glass, or water, etc., the mask passes right through without getting changed. However, if the ray interacts with metal or diffuse surfaces, or transmits through a refractive surface like glass or water, it picks up color via multiplication of the mask times intersec.color, whatever that might be. The mask value is the ray's contribution to the color of the pixel. accumCol is the final pixel color value, taking into consideration the mask (whatever it happens to be after all those bounces!) times the emissive color/power of the light source that the ray ended up hitting last.

if (intersec.type == CARCOAT) // Painted Metal with ClearCoat on top
{
	nc = 1.0; // IOR of Air
	nt = 1.4; // IOR of Clear Coat; a higher number equals more white 'plastic' reflection
	Re = calcFresnelReflectance(n, nl, r.direction, nc, nt, tdir);
	Tr = 1.0 - Re;

	// choose either specular clearCoat reflection (not tinted) or specular metallic reflection (tinted)
	// clearCoat component
	if (rand(seed) < Re)
	{
		// in this case, mask is not changed; it is a pure 'white' (non-tinted) reflection
		r = Ray( x, reflect(r.direction, nl) );
		r.origin += nl * uEPS_intersect;
		continue;
	}
	// metallic component
	mask *= intersec.color; // this is the color-tinting - set your metal color (intersec.color) to the desired reflection color, available on the internet in float RGB form (0.0-1.0, 0.0-1.0, 0.0-1.0) for most common metals
	r = Ray( x, reflect(r.direction, nl) ); // same reflection direction as above, but with color-tinting now
	r.origin += nl * uEPS_intersect;
	continue;
} // end if (intersec.type == CARCOAT)
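
To make the mask/accumCol relationship concrete, here is a minimal sketch of the moment a ray finally reaches a light source (the exact LIGHT handling varies between the demos):

if (intersec.type == LIGHT)
{
	// the ray's accumulated filter color times the light's emission becomes this sample's pixel color
	accumCol = mask * intersec.emission;
	break; // the path is finished once a light is hit
}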


Regarding the 2 lines that you didn't understand, I'll try to walk you through the process without getting too deep into the theory, because I don't know your level of expertise and I don't want to seem pedantic. So here it goes: if we're just ray tracing specular surfaces like mirrors, glass spheres, water, shiny metal objects, etc., like they used to do in the early days of ray tracing (the late 1970s and early '80s), then you can just bounce the rays around and pick up color when a ray hits metal (according to real-world metals and the laws of optics), or pick up color when transmitting through surfaces like glass and water, which gives them their hues. Otherwise the rays just bounce around forever and you eventually have to manually stop the process, or crash your computer (ha).

Now if we want diffuse surfaces in our scene, like a wall, or clothing, etc., we run into a big problem, because in the real world light rays are not only colored by diffuse objects, they also leave the surface in a continuous hemisphere shape oriented around the surface normal (nl) just above the diffuse surface. In reality this is a continuous function, with light rays entering and exiting over the entire hemisphere, adding all the possible light sources and directions together to give the surface its color and lighting from all possible angles at once. There's not enough horsepower in our greatest supercomputer to calculate and integrate all possible lights and colors even over that one little hemisphere, and that's just the one tiny diffuse spot where the ray hit!

Some bright pioneers like Kajiya in 1986 decided to combat this seemingly intractable problem by introducing Monte Carlo methods (named after the gambling region because they are based on randomness, statistics, and probability). Instead of waiting around forever for a tiny diffuse hemisphere's blended color to be calculated and integrated, we simply choose a random ray direction in the hemisphere and send the ray off on its merry way through the scene. Now as you can imagine, this will produce a noisy, unfinished, incorrect color compared to the nice diffuse blended color that we wanted in the first place. But if you shoot enough random rays and average the result (like coin-toss or dice experiments), it converges amazingly on the physically correct answer! There was still one more issue, with timing, because even with this cool new method, the early renderers had to wait around for thousands of rays to be sampled on old, slow hardware.

So some smart pioneer thought of direct light sampling, where more rays are 'artificially' directed towards the bright light sources, so we can more quickly exit the bounces loop (in path tracing, once a ray hits a light, you're basically done and you can exit). This trick is called importance sampling, because we are sampling just what is important in the scene and what will contribute the most to the final color and lighting: the lights.

All of this brings us to those 2 lines of code. The function in the first line artificially picks a random point directly on the quad light's surface (it could be a sphere light too, but then you have to use sampleSphereLight()). It returns not only the new direction for the ray to take (the dirToLight vector), but also a weight (of float type). Now why do we need a weight? Well, if you just artificially send huge amounts of rays to the lights, your image would overexpose and eventually become all white. This is because we cheated by sending the rays to the more important things in the scene, mainly the lights. So to offset this 'cheating', we down-weight the result by multiplying the sampled light emission by the weight. The clamp just makes sure that the function didn't return a bogus value (remember, probability states that there must be a 0.0-1.0 chance of an event occurring, no more and no less; negative probability is undefined in the theory).

In other words, we are down-weighting the light emission result by the probability that the diffuse ray would have found the light source on its own, without our cheating help (the bigger and closer the light is, the higher that weight turns out, and the inverse holds true as well). So this weight value, which usually turns out pretty low, like 0.1, correctly down-weights and offsets the fact that we artificially sent more rays in order to get a final color answer more quickly.
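
Putting those 2 lines in context, here is a hedged sketch of how they typically sit inside the DIFF handler (the exact surrounding logic varies from demo to demo):

if (intersec.type == DIFF)
{
	mask *= intersec.color; // diffuse surfaces color the rays that hit them
	// pick a random point on the quad light; the returned weight is the probability-based correction
	weight = sampleQuadLight(x, nl, dirToLight, quads[5], seed);
	mask *= clamp(weight, 0.0, 1.0); // down-weight the result to offset the 'cheating'
	r = Ray( x, dirToLight ); // send the ray directly toward the light
	r.origin += nl * uEPS_intersect;
	sampleLight = true; // so the bounces loop knows this ray was aimed at the light on purpose
	continue;
}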

Now all this being said, you only need these 2 lines for diffuse surfaces, and since we just want metal with clearCoat on top (not so much a fiberglass car, which would have a diffuse component), we don't need to worry about it for your metal surfaces - which is why I streamlined the CARCOAT function above. Technically it is no longer a CARCOAT now, but a METALCOAT (ha). If you want, you can update the enumerators for the different material types located near the top of my pathTracingCommon.js file. Just pick a number higher than the current highest and type #define METALCOAT 30 (or whatever high, unique number you want). Then you can use METALCOAT anywhere in the glsl file and it will have a unique identifying number.

Still reading about the metallicRoughness issue. I promise I'll get back to you on that! To tell you the truth, I only recently came to grips with all this PBR-materials-in-games stuff - I'm still learning about it myself and am far from being an expert on the subject.

Hope all of this helped! Be back soon...

btw those are beautiful images, it helps me better understand what type of surface you are wanting. Thanks for sharing! :)

About the second thing, the metallicRoughness map: maybe the way three.js is using it is just a shader trick that gives a better look than its previous material implementations, without needing actual physical concepts. So I don't know if it would be worth looking into that for you.

Hi again,
Yes, as I understand it, the way that three.js and other engines like Unity, Unreal, etc. use the various maps is in specialized PBR shader functions, to give materials a more plausible, real-world look that obeys the laws of physics, such as conservation of energy (you can't reflect more or brighter light than you put in).

If you take a look here, you'll see the various PBR functions, especially the ones that have GGX in the name:
PBR shader source for three.js

Three.js and all the other rendering/game engines don't actually path or ray trace these materials; the PBR functions, after all their complex calculations are done, spit out a plausible pixel color value as if they had taken the time to path trace the surface. PBR can get you close, but only so close, to real path tracing like we do here. The main reason is that accurate optical reflections are somewhat doable in screen space, but expensive for traditional rasterized graphics, and path-traced color bleeding (which requires global scene access to every single triangle in memory, whether visible to the camera or not, for the Monte Carlo integrated surface gathering discussed above) is out of the question!

To combat this shortcoming of traditional rasterized graphics, complicated, finicky hacks like cascading shadow maps, screen-space reflections, screen-space ambient occlusion (SSAO), light probes, light maps, spherical harmonics, etc., are employed in hopes of getting a little closer to what our little path tracer can do in under a hundred lines of simple ray-bounce code! So, although PBR shader functions look good and perform fairly well for traditional rasterized graphics pipelines, they are almost unnecessary for our purposes. The only useful components of those PBR materials are the actual textures:

  1. The diffuseMap (sometimes called the albedoMap), which gives the base color and is absolutely necessary for all rendering engines.
  2. The normalMap, which is very useful in all engines and tracers because it helps with lighting calculations and ray bounce directions (in our case).
  3. The emissiveMap (useful for the tiny lights on the damaged helmet model, for example).
  4. The ambientOcclusionMap (AO), which is unnecessary for us because we get the model's self-shadows for free just by the act of ray tracing.
  5. The metallicRoughnessMap, which gives the metal vs. non-metal info as well as how rough the surface of the model is at that exact location. Sometimes this map is combined with AO and stuffed into the various .r, .g, .b, and .a channels of the same texture, making an 'uber' metallicRoughnessAmbientOcclusionMap, but as mentioned, we can safely ignore any extra info like that on those particular channels.

To see how to use these various textures, take a look at the Animated BVH model demo that loads the damaged helmet with its PBR material textures.

One last question, how difficult you think it would be to implement bump/normal map and image radiance map?

This last question is a little tricky to answer and is based on a subject that I don't fully understand yet, to be honest. But I'll try:

Firstly, as mentioned, we already use normal maps in our ray/path tracing. Again, take a look at the Animated BVH Model demo for an example of how to load in and trace/use a normal map for lighting/ray-bounce calculations. Also, if you want a pure example of normal map loading and application to a simple object like a sphere, please take a look at my TheCompleatAngler demo, based on the first-ever ray-traced animated movie (each frame rendered for hours at a time) by Turner Whitted in 1979. I have no idea how he did a normal/bump surface back in 1979, but I employed the more modern technique of using a texture normalMap. If you look at the yellow metallic sphere that is circling around the glass sphere, it looks as if it has checkered crevices on it, which is the great illusion cheaply afforded us by normalMaps.

Although I know how to use the normalMap and ray trace it for bumpy reflections and for bright-vs-dark lighting calculations, I don't quite understand how the texture data is extracted from the image (which looks like a blue, purple, green ghost-outline image) and then converted from tangent space (?) to world space. Frankly, I gave up trying to understand the math in the perturbNormal function and just copied it from three.js' shader library (ha ha). I know that the colors in the weird-looking texture vaguely resemble vector directions in space, giving hills, valleys, ridges, edges, etc., but it is not as simple as that - you have to deal with tangent space as it relates to the tangent vector perpendicular to the surface normal. Oh well, maybe I'll understand it in the future. ;)

The bump map is usually looked down upon these days in favor of the more detailed normal map. Bump maps are only one channel, usually a grayscale from 0.0 to 1.0 floating point (or 0-255 unsigned integer) on the red (.r) channel of the texture. A typical use case is called a heightmap or displacementMap, used to deform (raise and lower) the vertices of a plane made of a bunch of triangles sandwiched together side by side. In traditional rasterized graphics pipelines, this will create a nice mountain or landscape shape rising out of the flat plane, with 0.0 (or 0) being a flat vertex, and 1.0 (or 255) making the highest vertex on the peak of the mountain.

The other, more traditional use is to have a bumpy pattern, such as pores on human skin, and then sample the bump texture 4 times (2 up and down and 2 left and right) to create a gradient normal vector that tells you how bright or dark the lighting should be at that point on the texture. Higher parts would typically get brighter, and lower parts would be the cavities, so they typically get darker. The result will look somewhat like a normalMap would, but with less accuracy, because of the discontinuities in the texture and the sampling approximation. normalMaps skip all the sampling stuff and just give you what you want - a per-pixel exact normal - at the obvious cost of 3 channels (r,g,b for the x,y,z components of the normal vector, or s,t,u in tangent space?) vs. 1 red channel (.r) for a bump map; storage and the amount of data/texture reads were more of an issue back in the old days.
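
Here is a hedged sketch of that 4-sample gradient technique ('tBumpMap', 'texelSize', and 'bumpScale' are illustrative names, not from the actual shaders; the resulting vector is in tangent space, and a full implementation would still have to rotate it into world space):

vec3 bumpMapNormal(vec2 uv)
{
	float hL = texture(tBumpMap, uv - vec2(texelSize.x, 0.0)).r; // left neighbor height
	float hR = texture(tBumpMap, uv + vec2(texelSize.x, 0.0)).r; // right neighbor height
	float hD = texture(tBumpMap, uv - vec2(0.0, texelSize.y)).r; // lower neighbor height
	float hU = texture(tBumpMap, uv + vec2(0.0, texelSize.y)).r; // upper neighbor height
	// the height differences act as slopes that tilt an otherwise flat (0,0,1) normal
	return normalize(vec3((hL - hR) * bumpScale, (hD - hU) * bumpScale, 1.0));
}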

However, that isn't to say that we can't use simple grayscale bump/height maps in ray tracing. There's a whole other field called ray marching where, if you set things up right, you can actually ray march (taking small, step-by-step distance approximations) through the texture to create that very same mountain landscape at the largest scale, or pores on human skin at the small scale, with no triangles or vertices at all! This sounds great, and I actually use this for my clouds in the outdoor OceanAndSky rendering and TerrainRendering demos, but here's the catch: ray marching is much more expensive, and removing its artifacts is finicky compared to traditional optimized ray-shape or ray-triangle intersection methods. The worst case is when half the GPU is looking at the sky and the other half is slowly stepping through a detailed mountain terrain - major thread divergence, which is a bottleneck for GPUs and cuts the frame rate in half, from 60 to 30 fps or worse!
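
For illustration, a brute-force heightfield march might look like this sketch ('tHeightMap' and 'TERRAIN_HEIGHT' are illustrative names; the real demos use smarter adaptive step sizes):

float rayMarchTerrain(vec3 rayOrigin, vec3 rayDirection)
{
	float t = 0.0;
	for (int i = 0; i < 300; i++) // a fixed iteration count keeps the GPU loop bounded
	{
		vec3 p = rayOrigin + rayDirection * t;
		float h = texture(tHeightMap, p.xz * 0.0001).r * TERRAIN_HEIGHT; // terrain height under p
		if (p.y < h)
			return t; // the ray dipped below the surface: count it as a hit
		t += 2.0; // small fixed step; too large a step skips over thin ridges
	}
	return INFINITY; // no terrain was hit along this ray
}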

So, yes, you can use bump maps (with ray marching) and normalMaps (with the perturbNormal function that I 'borrowed' from three.js) - please refer to the demos I mentioned. But just be aware of the tradeoffs: speed, memory, the size and number of textures needed, shader complexity, etc.

Finally, have a look at the gltfViewer demo, namely the Gltf_Viewer.glsl file, for how to use an HDR irradiance map - yes, we are already using irradiance maps on this project. Briefly, instead of having rays shoot into the sky and return a solid color, or doing physical sky/sun calculations like in some of my outdoor demos, you just load in an equirectangular HDR image (freely available on the internet) and use the Get_HDR_Color(Ray r) function (which I also 'borrowed' from three.js' shader library, ha). You call this when the Ray r hits the sky, or when t == INFINITY. This function takes the ray's outgoing direction towards the sky and uses it to look up an exact uv location on the texture, which makes the texture look as if it is perfectly wrapped in a spherical shape around the scene, infinitely far away (it does not get closer or farther as you move the camera).
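
Conceptually, the equirectangular lookup boils down to something like this sketch ('tHDRTexture' is an illustrative sampler name, and the exact sign/offset conventions differ between implementations):

vec3 equirectangularLookup(Ray r)
{
	// convert the outgoing ray direction into spherical angles, then into 0..1 uv space
	vec2 uv = vec2( atan(r.direction.z, r.direction.x) * 0.15915494 + 0.5,  // 1 / (2 * PI)
			asin(clamp(r.direction.y, -1.0, 1.0)) * 0.31830989 + 0.5 ); // 1 / PI
	return texture(tHDRTexture, uv).rgb;
}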

Hopefully I have answered all the questions you were wondering about. Sorry if it seems like I was trying to lecture with my lengthy posts, but I just love all this ray/path tracing stuff and it's easy to get lost going down the rabbit hole, lol! If you need more info, feel free to ask! :-)

-Erich

Wow! Metal coat worked flawlessly!
image

And yes, how could I forget this: you used a normal map in the Bi-Directional Difficult Lighting sample.

I actually went through all the samples, by the way. The reason I did not notice the skybox/HDR image in the GLTF Viewer is because I was only looking at the model the whole time 😁.

I would say the Billiard Table is the best one. One cannot tell the difference between it and an actual billiard table.

Thank you for the detailed explanation of how things are actually working - it is overwhelming. The lengthy posts actually covered everything about that topic, so I didn't even have to come back to you with any questions.

Hey @ndhabrde11
That's great news! Beautiful image, thanks again for sharing! I'm so glad we found the material that you were looking for. Mmm.. looking at that pretty rendering, I think I will be adding a METALCOAT material to my own project in the future! ;-)

Yes, I totally forgot about the Bi-Directional Difficult Lighting demo, which uses a normal map for the hammered-steel teapot - thanks for reminding me! That and the Whitted_CompleatAngler demo (which uses a simple normal map on a sphere) should give you an idea of how to use normal maps while ray tracing to give the illusion of depth and extra detail without changing the shape or triangle geometry (which would cost much more in the ray intersection department). Normal maps are definitely the way to go if you need to cheaply add surface details.

I'm glad you understood my explanations of how things are working - yes, it can be overwhelming. I started this whole project by being fascinated with Kevin Beason's smallpt - how did he get those beautiful images from 100 lines of C++ code?

Therein lies the trap of getting into ray tracing. If you just want to ray trace like they did in the old days, with no diffuse color bleeding or blending, you can get away with literally 50 lines of C code (loop through the pixels on the screen, check for ray-sphere intersections, terminate the tight loop after so many bounces, gather the pixel colors, save to a simple human-readable .ppm image file format), and you have a working renderer! Kevin Beason's is slightly longer at 100 lines (ha) because he actually does randomized Monte Carlo integration for diffuse surfaces, as discussed earlier, like we do on this project - but still, physical reality in 100 lines of code? Tell me more!

Well, if you want to render the final image to the screen instead of a .ppm file, your renderer grows; if you want to be able to fly a camera around in real time, your renderer grows; if you want to intersect shapes other than spheres, your renderer grows more; if you want to intersect triangles so you can load gltf-format models, it grows a lot more; and if you want different materials, it grows even more, and so on and so on... 3 years and 10,000+ lines of code later, here we are! Lol

But seriously, it can all seem overwhelming if you try to understand everything right up front. Instead, if you take it a little at a time - understand basic ray calculations, understand pixel colors, understand shapes, etc. - eventually it will become more manageable when you're trying to juggle it all in your mind.

Thank you for the suggestions and best of luck to you on your project!
-Erich