gregtatum/gl-engine

Shader, material, augment architecture


Intent

So the API that I'm trying to make for materials is as follows.

var material = FlatMaterial({
        color: [1,0,0]
    })
    .use(FogAugment, {
        color : [1,1,1],
        near : 10,
        far: 100
    })

One issue I've had with working with and understanding other material systems is the one huge configurable object with lots of different settings. The duplication of code and documentation between different material types makes it hard to understand as both a user and a developer. I'm also trying to avoid inheritance in favor of composition. This is all user-facing interface, and I'm fairly happy with it. It feels easy to understand and use, but internally it's still creating one big configured material object that holds all of the settings needed to render it.
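To make that concrete, here's a minimal sketch of how the composition could work without inheritance. The internals here are hypothetical; only the FlatMaterial / FogAugment names and the .use() signature come from the example above. The idea is that each augment merges its own uniforms and a feature flag into the material.

var FlatMaterial = function (options) {
    var material = {
        uniforms: { uColor: options.color, uOpacity: 1.0 },
        defines: [],
        use: function (augment, augmentOptions) {
            // Composition rather than inheritance: the augment mixes its own
            // uniforms and defines into this material, which is returned for chaining.
            augment(material, augmentOptions)
            return material
        }
    }
    return material
}

var FogAugment = function (material, options) {
    material.defines.push('FOG')
    material.uniforms.uFog = {
        color: options.color,
        near: options.near,
        far: options.far
    }
}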

Shader Architecture

I don't have much experience developing my own browserify and glslify static transforms, so I'm wondering if there is something more clever I can do here, but the following is my current approach. Currently each type of material (Flat and Lit) has its own uber shader. At a high level I have a flat.frag and flat.vert that contain import statements for the top-level variables, plus the main execution code.

flat.frag

precision mediump float;
#define SHADER_NAME flat material

#pragma glslify: import('./vars.frag')

void main() {
    #pragma glslify: import('./main.frag')
}

vars.frag

The vars file looks like this:

uniform vec3 uColor;
uniform float uOpacity;

#pragma glslify: import('../common/camera/camera-struct.glsl')
#pragma glslify: import('../common/camera/camera-vars.frag')
#pragma glslify: import('../../augment/fog/fog-vars.frag')

Then finally fog-vars.frag uses #ifdefs to filter out the code:

#ifdef FOG
    struct Fog {
        float near;
        float far;
        vec3 color;
    };

    uniform Fog uFog;

    float calculateFog(
        const float cameraDistance,
        const float near,
        const float far
    ) {
        return 1.0 - clamp((far - cameraDistance) / (far - near), 0.0, 1.0);
    }
#endif

I then have an npm run script that runs glslify and spits out a fully built shader, which is checked into version control. So far this makes it easy to develop new features, since I can see the full file and how everything works together. When I'm building a new feature I work in the main file first, then split the code out.
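For reference, that build step can live in package.json as a plain npm script. The file paths here are assumptions; glslify's CLI output is just redirected into the checked-in file.

{
  "scripts": {
    "build-shaders": "glslify lib/material/flat/flat.frag > lib/material/flat/flat-built.frag"
  }
}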

How I got here

The import directive preserves the naming of my variables, while require can mangle the names. Recently glslify changed its behavior to not mangle the names as much, but I'm concerned there is still a chance glslify will rename my variables. I didn't really set out to design a ShaderChunk system, but I feel like this is where I ended up. I originally wanted it to feel more like a glslify-native experience. The augments feel very dynamic rather than static when using the API. I'm not sure if there is a way to harness the power of glslify to make this a bit smarter without resorting to all of the #ifdefs. Of course, I still want all of the examples to work out of the box in something like requirebin. I'm not sure I have a great solution to this other than where I'm landing right now.

Avoiding ShaderChunk issues

Ultimately I want people to be able to dive into shader code very easily, and then not have to worry about their examples breaking between updates. A big pain point I've had with other engines is how hard it is to get to the shader code, and I want to make that easier. My thought so far is that if I want to do a custom flat material, all I have to do is copy a small piece of boilerplate and dive in. Then I want a certain amount of confidence that my shader won't break between updates.

Custom shader example

    precision mediump float;
    #define SHADER_NAME flat material

    uniform vec3 uCustomValue;

    #pragma glslify: import('./vars.frag')

    void main() {
        #pragma glslify: import('./main.frag')
        gl_FragColor.rgb *= uCustomValue;
    }

/cc @mattdesl @hughsk

This is a tricky one! Apologies in advance for the brain dump, not necessarily recommending any of this but food for thought :)

glslify in the browser

It's possible to bundle up glslify shaders in the browser using the same tools as glslbin, though to be useful for requirebin it'd need to be easier to set up.

This means:

  • Having a package which can take a shader string and grab the output from a server.
  • Having a server hosted somewhere to take these requests, and be the default host for said package.

I have my reservations about promises, but if we used them, requirebin shaders could end up looking like this:

const glslify = require('glslify-remote')
const vert = glslify(`...`, { inline: true })
const frag = glslify(`...`, { inline: true })

shader(vert, frag)

This could then be toggled on/off via the transform, and it would be easy to switch over to, provided the shader isn't generated dynamically. The problem then is that you still can't really modify the shaders without switching out #defines.
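For what it's worth, switching out #defines on a prebuilt shader string is pretty mechanical; a rough sketch (withDefines is not an existing API, just an illustration):

// Rough sketch: toggle features by prepending #define flags to an
// already-built shader string before compiling it.
function withDefines (source, defines) {
  var flags = defines.map(function (name) {
    return '#define ' + name
  }).join('\n')
  return flags + '\n' + source
}

// e.g. withDefines(builtFragSource, ['FOG']), where builtFragSource is
// assumed to be the output of the prebuilt glslify step.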

It's also possible (with some work on my part) to supply a list of packages to expose to a shader and bundle them up dynamically in the browser, though that pushes complexity onto the user:

{
  "name": "my-project",
  "glslify": {
    "bundle": [
      "glsl-*",
      "glam/lighting",
      "./lib/shaders/*"
    ]
  }
}

const glslify = require('glslify')

glslify(someDynamicShaderString, { inline: true })

Bridging GLSL and JS

Another difficulty here comes from augments being done in JS. One option would be to move this stuff over into the shader, e.g.:

// main.glsl
precision mediump float;

#pragma glslify: import(glam/fragment)

#pragma glslify: applyLighting = require('glam/lighting/simple')
#pragma glslify: getAlbedo = require('glam/albedo')
#pragma glslify: applyFog = require('glam/fog')

void main() {
  getAlbedo(vFragment, uMaterial);
  applyLighting(vFragment, uMaterial, uCamera, uLights);
  applyFog(vFragment, uCamera, GlamFog(vec3(0, 0, 0), 0.1, 100.0));

  gl_FragColor = vec4(vFragment.color, vFragment.opacity);
}

// fragment.glsl
#pragma glslify: GlamMaterial = require('glam/material')
#pragma glslify: GlamCamera = require('glam/camera')
#pragma glslify: GlamLight = require('glam/light')
#pragma glslify: GlamFog = require('glam/fog')

uniform GlamLight uLights[GLAM_LIGHT_COUNT];
uniform GlamMaterial uMaterial;
uniform GlamCamera uCamera;
varying GlamFragment vFragment;

Using the inout keyword in the getAlbedo/applyLighting/applyFog functions you can modify those incoming struct values:

float calcFog(float d, float near, float far) {
  return 1.0 - clamp((far - d) / (far - near), 0.0, 1.0);
}

void applyFog(inout GlamFragment fragment, GlamCamera camera, GlamFog fog) {
  float dist = length(fragment.position - camera.position);

  fragment.color = mix(fragment.color, fog.color, calcFog(dist, fog.near, fog.far));
}

#pragma glslify: export(applyFog)

struct GlamFragment {
  vec3 position;
  vec3 normal;
  vec3 color;
  vec3 uv;
  float opacity;
};

#pragma glslify: export(GlamFragment)

struct GlamCamera {
  vec3 position;
  vec3 direction;
  mat4 view;
  mat4 projection;
  mat4 model;
  float near;
  float far;
  float fov;
};

#pragma glslify: export(GlamCamera)

You could use the above approach internally and combine it with the #define flag stuff. That way you'd be able to keep the current augmentation API but allow users to switch to the shader level when they need to without sacrificing flexibility or requiring them to rewrite everything from scratch.

struct constructors

Structs are a little messy. Because constructors are just a positional list of arguments, the user has to update every call site by hand whenever the struct changes. It'd be possible to add default values and named arguments with a transform:

// before:
struct GlamFog {
  vec3 color = vec3(1);
  float near = 0.0;
  float far = 100.0;
};

GlamFog({ color: vec3(0) });

Extending the language makes me feel bad though 😱 Can also be avoided by never calling struct constructors:

GlamFog fog;
fog.color = vec3(0);

I would love to write custom shaders that looked like this:

// main.glsl
precision mediump float;

#pragma glslify: import(glam/fragment)

#pragma glslify: applyLighting = require('glam/lighting/simple')
#pragma glslify: getAlbedo = require('glam/albedo')
#pragma glslify: applyFog = require('glam/fog')

void main() {
  getAlbedo(vFragment, uMaterial);
  applyLighting(vFragment, uMaterial, uCamera, uLights);
  applyFog(vFragment, uCamera, GlamFog(vec3(0, 0, 0), 0.1, 100.0));

  gl_FragColor = vec4(vFragment.color, vFragment.opacity);
}

I might even be able to do something like this...

var mat = LitMaterial().use( ShaderAugment, {
   fragSource: frag,
   vertSource: vert
})

mat.shading.fog.color = [0.5,0.5,0.6]

And then have it do some regex matching for features used in the shader source, apply the necessary augments, and provide defaults for their values. Or you could apply the augments manually ahead of time, and the custom shader would replace them.
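A very rough sketch of that detection (the augment table, default values, and regexes here are all made up for illustration, and it assumes the .use() style API from above):

// Hypothetical: map uniforms that appear in the custom shader source to the
// augment (and defaults) that should be applied to the material.
var augmentsByUniform = {
    uFog: { augment: FogAugment, defaults: { color: [1, 1, 1], near: 10, far: 100 } }
}

function applyDetectedAugments (material, shaderSource) {
    Object.keys(augmentsByUniform).forEach(function (uniformName) {
        if (new RegExp('\\b' + uniformName + '\\b').test(shaderSource)) {
            var entry = augmentsByUniform[uniformName]
            material.use(entry.augment, entry.defaults)
        }
    })
    return material
}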

I'll have to play around with refactoring the code a bit and try out some of these ideas. I may just keep custom shader examples out of RequireBin due to the complexity of getting them to work in an environment like that. It sounds like juggling chainsaws :) I like the idea of RequireBin giving new users a very low barrier to entry. I think once someone is able to get to the shader side, it's not so bad to expect them to load up a local dev environment to play around with the examples.

The only thing I'm not understanding from the above is what you're getting at with the dynamic bundling part, and where that fits into the process.

{
  "name": "my-project",
  "glslify": {
    "bundle": [
      "glsl-*",
      "glam/lighting",
      "./lib/shaders/*"
    ]
  }
}

Just refactored the flat materials. This is how the user-facing shaders will be structured. I'm thinking that on the documentation page you can copy/paste a working shader and customize from there. I think these will eventually be broken out into a more modular form to allow for proper semver support, so end users can have more confidence that the shaders they write will keep working in the future, or at least have a documented upgrade path. I'm not ready to start breaking things out yet, though.

flat.vert

#pragma glslify: applyCamera = require('../common/camera')
#pragma glslify: import('./vars.vert')

void main() {
  applyCamera(
    gl_Position, aPosition, uModel, uCamera,
    vCameraPosition, vCameraDistance, vCameraDirection
  );
}

flat.frag

#pragma glslify: fogAugment = require('../../augment/fog')
#pragma glslify: import('./vars.frag')

void main() {
  gl_FragColor = vec4(uColor, uOpacity);
  fogAugment(gl_FragColor, uFog, vCameraDistance);
}

Augments

Augments like fog end up being noop functions if they aren't configured for the material. That's probably a small performance hit, but it makes the code simpler to write. Not 100% sure about this approach.

#ifdef FOG
    float calculateFog(
        const float cameraDistance,
        const float near,
        const float far
    ) {
        return 1.0 - clamp((far - cameraDistance) / (far - near), 0.0, 1.0);
    }

    void applyFog(
        inout vec4 fragment,
        Fog fog,
        float cameraDistance
    ) {
        fragment.rgb = mix(
            fragment.rgb,
            fog.color,
            calculateFog( cameraDistance, fog.near, fog.far)
        );
    }
#else
    void applyFog(inout vec4 fragment, Fog fog, float cameraDistance) { }
#endif

#pragma glslify: export(applyFog)