TSL nodes Composability / Stacking / Extends ?
Samsy opened this issue · 14 comments
Description
Using the legacy system for shaders, we could replace parts of the shading with other code:
Example:
source: "vec4 diffuseColor = vec4( diffuse, opacity );",
replace: "vec4 diffuseColor = vec4( diffuse, opacity * 0.5 );"
With TSL (Three.js Shader Language), the challenge is (unless I missed something, or could not find a solution on the wiki or in the examples):
I cannot find a way to extend/modify an existing node after it has been processed by the material's built-in computations (lighting, textures, etc.); the only option right now is to override the colorNode entirely.
Two key technical limitations:
the node isn't available immediately on material creation
No built-in mechanism to stack/chain node operations
Solution
Potential solution approaches:
- A node stacking system: material.whatevernode.stack(modifierNode)
- A way to reference the computed whatevernode value in the next entries of the stack
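Purely as a hypothetical sketch of what this could look like in use (none of this API exists today; the idea is that the built-in computation stays in place and each modifier receives the previously computed value):
material.colorNode.stack( Fn( ( [ builtInColor ] ) => {
    return builtInColor.mul( 0.5 ); // e.g. darken whatever the material computed
} ) );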
String substitution is counterintuitive with node systems; one of the goals of their creation is to avoid these types of hacks. Unlike a fixed code/pipeline, the node system creates variables and code dynamically, so the generated code will be different with each modification and won't always match the source string. If you do everything using .colorNode, there may be something wrong with your Material configuration: the various available node inputs such as opacityNode, normalNode, metalnessNode, etc. are created to avoid this approach. In the same way, we have nodes to take care of the Lighting Model and the Lighting System.
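For instance, the opacity tweak from the issue description could be written as a node input instead of a string replacement. A minimal sketch, assuming a node that reads the material's own opacity value (the materialColor / materialOpacity accessors discussed later in this thread):
material.opacityNode = materialOpacity.mul( 0.5 );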
@sunag I think the fundamental use case is augmenting and adding new surface qualities on top of an existing material - e.g. a user-provided one, one loaded through a library, one instantiated through three.js, etc.
In my visualization work it's been common to take pre-made materials and add different "layers" of effects on them, such as topographic lines, clipping (alphatest / transparency / discard), vertex displacement, and so on, blending them on top of the effects provided by the pre-existing material. Previously you could do this by augmenting fragment or vertex strings, but as you say this wouldn't be viable any more.
I'm still not so familiar with the node material system and perhaps there's already a way but intuitively I would expect to be able to "unhook" whatever is feeding into the color node (or any other), then hook the old and new color effect into a blend node, and then hook that blend node back up to the color node. I'm also curious to hear how this might be done, though.
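To make that expectation concrete, a purely hypothetical sketch of the "unhook / blend / rehook" workflow (today material.colorNode is not populated with the built-in computation, so this does not actually work; myEffectColor is a placeholder):
const original = material.colorNode;                      // whatever was feeding the color slot
material.colorNode = mix( original, myEffectColor(), 0.5 ); // blend the new layer on top and hook it back up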
@gkjohnson said it all!
The main trouble here is extending existing nodes, not "overriding" them.
There are currently two main ways to extend a Material/Node. The first is through its inputs. Previously, to modify an existing material's code we needed to modify the shader by injecting strings; now all we need to do is add a node in the desired input with .colorNode, .alphaTestNode, positionNode, etc. Almost all inputs of the WebGLRenderer Materials now have a *Node suffix where extensions can be performed.
@brunosimon recently made a great explanation about this:
https://youtu.be/cesPK0kYkyE?t=247
The management of the declarations and the code sequence is generated dynamically, which makes it easy to move and reuse graphs in different material processes. For example, a topographic lines effect could be created in a node function like Fn( () => ... ); the user could import this function and use it in colorNode, opacityNode or metalnessNode if it is a MeshStandardNodeMaterial, just by setting material.metalnessNode = topographicLines().
The TSL functions allow access to material and geometry properties, uniforms and attributes, including native code. Each function can generate its own uniforms, attributes, etc., and these will only be assigned if they are used on the Material. So concerns about global modifications to the Material to add one-off effects are not necessary.
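As a rough illustration of the topographic-lines idea mentioned above (not an official example; the spacing and threshold values are placeholders), such a function could be sketched like this:
const topographicLines = Fn( ( [ baseColor ] ) => {
    const spacing = uniform( 0.2 );
    const band = fract( positionWorld.y.div( spacing ) );
    const line = step( band, 0.05 ); // 1.0 inside the thin band, 0.0 elsewhere
    return mix( baseColor, color( 0x000000 ), line );
} );
material.colorNode = topographicLines( materialColor );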
inline displace
const displacementMap = texture( map/*, uv()*/ );
const displacementScale = uniform( 1 );
// custom displace
material.positionNode = positionLocal.add( normalLocal.mul( displacementMap.r.mul( displacementScale ) ) );
using Fn
const displaceIt = Fn( ( [ displacement, scale = float( 1 ) ] ) => {
    return positionLocal.add( normalLocal.mul( displacement.mul( scale ) ) );
} );
const displacementMap = texture( map/*, uv()*/ );
const displacementScale = uniform( 1 );
// custom displace
material.positionNode = displaceIt( displacementMap.r, displacementScale );
using Fn with embedded uniforms
const displaceIt = Fn( () => {
    const displacementMap = texture( map/*, uv()*/ );
    const displacementScale = uniform( 1 );
    // some other logic here
    return positionLocal.add( normalLocal.mul( displacementMap.r.mul( displacementScale ) ) );
} );
// custom displace
material.positionNode = displaceIt();
Another way would be to extend the Material classes. If you want to create a Material with a different lighting model, this is also possible by extending the classes and methods such as setupLightingModel(); there are many others like setupClipping(), setupEnvironment(), ... This is how we built all the Materials in WebGPURenderer.
Nodes can also be extended to create more effects that require rendering manipulation, like bloom().
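A minimal sketch of that class-extension route (MyLightingModel is a placeholder for a custom LightingModel subclass, not a real three.js class):
class MyMaterial extends MeshStandardNodeMaterial {
    setupLightingModel( /* builder */ ) {
        return new MyLightingModel(); // swaps out the default lighting model
    }
}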
Hey @sunag, thanks for this answer and the explanations, all of this is understood.
However, as I tried to explain, I think there is one crucial case missing here that @gkjohnson explained better than I did previously.
But I guess an example is better:
In the legacy system, we could achieve this with a simple material extension:
// JSFiddle example: https://jsfiddle.net/Ltnhcvk9/52/
material.onBeforeCompile = ( shader ) => {
    shader.fragmentShader = shader.fragmentShader.replace(
        '#include <colorspace_fragment>',
        `
        // Modify final color output here
        gl_FragColor.rgb = hue_shift( gl_FragColor.rgb, time );
        #include <colorspace_fragment>
        `
    );
};
This approach had several benefits:
It could modify the final color output after ALL material calculations were done (before the fog calculations)
It was completely agnostic to the material's properties (maps, baseColor, textures, etc.)
It worked regardless of the material's lighting model or other features
How can this effect be achieved in TSL?
There are multiple options I can think of, but:
colorNode isn't viable because it executes before the lighting calculations and would need knowledge of the material's properties (textures, etc.)
fragmentNode isn't suitable because it requires reimplementing all the material calculations, which defeats the purpose of a lightweight color modifier, and it is also tied to the actual material properties
In case a colorSpaceFragmentNode exists (which I am not aware of, and I am not even sure this still matters in WebGPU), I would still need access to the current value of that node, so I could add the plugin before it, write into the computed value, and pass it on to the next node
Which approach would best align with TSL's architecture while maintaining the extension simplicity of the legacy system?
I bumped into this problem while porting an entire code base of 200+ shaders to TSL, and ran into trouble on 90% of the materials that extend built-in three.js materials. Most of the materials use simple extends like this example, or complex ones, and often multiple extends are stacked on top of each other to achieve complex shading without being aware of previous vertex or surface calculations.
I wish there were a way to queue nodes before / after other nodes, and pass the output value from one to the next, so each sequence could be modified by multiple nodes instead of one.
For a true extend of a built-in material, it is not necessary to re-write a colorNode; it is necessary to extend a colorNode before or after, without being aware of the actual content of the built-in colorNode and the parameters it uses (like maps, etc.).
Another example:
Let's say you need to introduce an opacity fade with a discard when an object is too close to the camera. How do you proceed? By writing an opacityNode, but even writing an opacityNode causes trouble because it loses all the built-in opacity calculations.
Again, this needs knowledge of the actual material and won't work out of the box regardless of the material's properties, but a stacked opacityNode that retrieves the output value of the built-in opacity node could then process the new output.
Possible solution (or not)
Each node input would accept an array of node functions, in which you could select a node.defaultColorNode that is filled properly per material, and add a new node before or after it to extend the behavior
Each stacked node would pass the output of its own function to the next one, chaining the computation
node.colorNode = [
    node.getDefaultColorNode(),
    node.pluginColorNode()
];
Having used TSL intensively in production for 4 months, I have also felt this need.
I think a good approach / quick fix at the moment would be:
- NodeMaterial made of 100% nodes, even if the value is simple, e.g.
this.opacityNode = float( 1 )
- No other logic added into the material; the nodes just all chain to create the final glsl/wgsl shader
- Add basic docs (really important, as this creates a lot of frustration)
-- with the list of nodes and what they do.
-- with the list of materials and what nodes are accessible.
-- just saw this commit, thanks for the effort @Mugen87
This way we can extend them more easily and get the values previously returned.
The replacement is done for optimization reasons. Once the user uses a .colorNode, then .color is not needed, as well as .map, and this happens with all similar entries. It would be a computational waste to multiply the entries when the user could get away with using constants in the node entries instead of uniforms. But it is something simple once documented; I will provide it for this release.
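To illustrate the constant-versus-uniform point (a tiny sketch, not from the thread):
material.opacityNode = float( 0.5 );   // compiled in as a constant
material.opacityNode = uniform( 0.5 ); // stays adjustable at runtime, at the cost of a uniform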
You can add a Discard node to any of the TSL entries using Fn.
Example:
const myOpacity = Fn( () => {
    If( a.lessThan( b ), () => {
        Discard();
    } );
    return c;
} );
material.opacityNode = myOpacity();
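Applied to the near-camera fade question above, a hedged sketch (untested; the distances are placeholders) could combine this Discard pattern with materialOpacity, so the material's own opacity/alphaMap keeps contributing:
const nearFade = Fn( () => {
    const dist = positionView.z.negate(); // view-space depth of the fragment
    const fade = smoothstep( 0.5, 2.0, dist );
    If( fade.lessThan( 0.01 ), () => {
        Discard();
    } );
    return materialOpacity.mul( fade );
} );
material.transparent = true;
material.opacityNode = nearFade();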
To control the output of materials, we have the output node and the .outputNode input. This could look like:
Example:
material.outputNode = hue_shift( output, time );
Color Space is always applied in post-processing, whether internally by the renderer or explicitly through the PostProcessing class.
Thanks again to @Mugen87 for starting the documentation, and I will look into improving the TSL documentation as well, most of the issues currently are related to that.
Hey @sunag, thanks a lot for the answer.
This part:
"The replacement is done for optimization reasons. Once the user uses a .colorNode then .color is not needed, as well as .map, and this happens with all similar entries." is exactly the trouble.
Before setting the nodes:
After setting the nodes:
The left mesh material in the fiddle has a map property, but it is overridden when setting the colorNode.
The right mesh material in the fiddle has an alphaMap property, but it is overridden when setting the opacityNode.
- We cannot extend an internally computed built-in node
- How do we give context awareness of the material's properties when re-writing a node?
- How do we re-write a node without dropping all of its initial content?
- There is no way to access a node filled with the material's internal built-ins that would allow us to actually extend the behavior
Building a generic plugin for any material, regardless of the material's properties, is not possible at the moment. The only way to do it right now would be to go through all the properties of the material and manually re-write the built-in node code in order to extend it, or to have a way to access the current built-in node's output value so it can be chained with another node for further computation.
100% of the advanced users I know use legacy code injections into built-in three.js materials and have made hundreds or thousands of materials that we could not port to TSL.
In this case, the injection could still be used with the material* nodes, preserving both functionalities:
Example:
material.colorNode = materialColor;
material.opacityNode = materialOpacity;
material.metalnessNode = materialMetalness;
// ..
This allows you to inject the properties defined in the Material at any time in the Node, for example:
material.colorNode = hue( materialColor.rgb, time );
Basically, each node follows the same property name of the material, used as a suffix (materialColor for .color, materialOpacity for .opacity, and so on).
I will describe these names in the TSL Spec in this release as well.
https://github.com/mrdoob/three.js/wiki/Three.js-Shading-Language#nodematerial
List of properties below:
three.js/src/nodes/accessors/MaterialNode.js (lines 394 to 433 at 0c45156)
Hey @sunag
I've been experimenting a bit, testing the capabilities of these extends using, let's say, materialNormal.
In this material:
- a normalNode which rotates the materialNormal on the Y axis
- a colorNode which reads the materialNormal
Observations:
The normalNode rotates the normal, which works because we can see the shading rotating correctly
The colorNode does not read the rotated normal (if you uncomment line 99 to rotate the colors, that would be the expected effect)
What is the relationship between materialNormal and normalNode?
For an extend of a built-in material, I guess we should expect to modify the normalNode, and this would then propagate to the other built-in nodes, unless the normalNode is evaluated in the fragment stage, which becomes expensive.
Is there a way to transform the normals in the vertex stage, at the geometry level, so they are then injected into the built-in nodes in the fragment stage?
Like here, an example of rotating cubes on which a directional light affects the coloring output, but the normals are wrong, since they are not rotated as well.
How would we correct the normals by using the same rotation matrix given by rotateY( time ) in the vertex stage?
Many thanks
How would we correct the normals by using the same rotation matrix given by rotateY( time ) in the vertex stage?
In this case, you would have to rotate normalLocal before returning the rotated position.
https://jsfiddle.net/qbck6Lg1/1/
material.positionNode = Fn( () => {
    const pos = attribute( 'position', 'vec3' ).toVar();
    const offset = attribute( 'offset', 'vec3' ).toVar();
    const rotMtx = rotateY( time.add( hash( offset.x.add( offset.y.add( offset.z ) ) ) ) );
    normalLocal.assign( rotMtx.mul( normalLocal ) );
    return rotMtx.mul( pos ).add( offset );
} )();
Hey @sunag, thanks a lot for the replies, the normalLocal trick is working.
I bumped into another extend problem here; this is an override of fog_pars_fragment in the legacy renderer.
This is a fog effect that first turns the current color into a 'fadeColor', then transitions to the fogColor. The problem here is that it uses the current gl_FragColor.rgb value.
The approach to achieve this would definitely be a fogNode. The trouble is (unless I missed something) that I cannot access the "current color" assigned to the output variable at the point it enters the fogNode. Is there any way to achieve this in the current system?
#ifdef USE_FOG

    #ifdef FOG_EXP2
        float fogFactor = 1.0 - exp( - fogDensity * fogDensity * vFogDepth * vFogDepth );
    #else
        float fogFactor = smoothstep( fogNear, fogFar, vFogDepth );
    #endif

    // this is where the trouble is =>
    vec3 closeColor = gl_FragColor.rgb; // Your original fragment color
    // <<= trouble

    vec3 midColor = midFadeColor; // The mid color you want
    vec3 farColor = fogColor;     // The far color is the fog color

    // Adjust the fogFactor ranges for each color step
    float midFogFactor = smoothstep( 0.0, 0.8, fogFactor ); // Adjust the range as needed

    // Blend between the three colors based on fogFactor
    vec3 blendedColor = mix( closeColor, midColor, midFogFactor );
    blendedColor = mix( blendedColor, farColor, fogFactor );

    gl_FragColor.rgb = blendedColor;

#endif
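One possible direction in TSL, sketched here with the outputNode approach described earlier in the thread (untested; the colors and distances are placeholders for the real fog settings, and positionView.z is used as the fog depth):
const fogNear = uniform( 1 );
const fogFar = uniform( 50 );
const midFadeColor = color( 0x6688aa );
const fogColor = color( 0xffffff );

material.outputNode = Fn( () => {
    const fogDepth = positionView.z.negate();
    const fogFactor = smoothstep( fogNear, fogFar, fogDepth );
    const midFogFactor = smoothstep( 0.0, 0.8, fogFactor );
    // 'output' holds the color the material has already computed when this node runs
    const blendedColor = mix( output.rgb, midFadeColor, midFogFactor );
    return vec4( mix( blendedColor, fogColor, fogFactor ), output.a );
} )();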