This library provides a minimal texturing system for testing image-processing and convolution-like shaders using a multi-framebuffer approach.
Run

```
npm run build
```

or

```
yarn build
```

to bundle your application.
For development, run

```
npm run watch
```

to rebuild on file changes.
To serve the bundle locally (these steps assume the simple `http-server` Node server is used):

```
npm run build
cd dist
http-server . -o
```
The build produces a single file called `wtex.js`, which exposes a global object called `wtex`. The `wtex.texturing` function takes an options object as input and draws a fullscreen canvas with your texture. The input can be either an image or a video: a DOM element is built from the input config you provide and is used to feed the main texture. The options object looks like this:
```js
{
  vertex_shader: "<your vertex shader code as text>",
  fragment_shader: "<your fragment shader code as text>",
  config_path: "/config/config.json", // path the config JSON file is downloaded from; defaults to "/config/config.json"
  WIN_LOADED: true, // whether to wait for window.addEventListener('load') before starting
  frame_update: (current_program, gl, opts) => { /* add any logic */ } // optional; called at each draw iteration, before the actual draw command
}
```
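
For example, a minimal pass-through invocation might look like the sketch below. The attribute, varying, and sampler names (`a_position`, `v_uv`, `u_texture`) are assumptions for illustration; use whatever names your shaders and your wtex texture bindings actually expose.

```js
// Minimal sketch, not the library's canonical example.
// a_position, v_uv and u_texture are assumed names: adapt them to the
// attributes/uniforms your wtex build actually binds.
const vertexShader = `
  attribute vec2 a_position;
  varying vec2 v_uv;
  void main() {
    v_uv = a_position * 0.5 + 0.5;            // map clip space to UV space
    gl_Position = vec4(a_position, 0.0, 1.0);
  }
`;

const fragmentShader = `
  precision mediump float;
  uniform sampler2D u_texture;
  varying vec2 v_uv;
  void main() {
    gl_FragColor = texture2D(u_texture, v_uv); // plain pass-through
  }
`;

wtex.texturing({
  vertex_shader: vertexShader,
  fragment_shader: fragmentShader,
  config_path: "/config/config.json",
  WIN_LOADED: true
});
```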
The config JSON file (fetched from `config_path`) describes the input source, the output canvas, and the framebuffer chain. The comments below are for documentation only; strip them from the actual JSON file:

```jsonc
{
  "input": {
    "paths": ["/img/tree.jpg"], // array of paths the image is retrieved from; multiple image paths are allowed
    "domQuery": "#imgDomId",    // OR, as an alternative, a DOM query selecting an element to read from
    "isVideo": false            // whether the input is expected to be a video rather than an image
  },
  "output": {
    "domQuery": "#canvas" // DOM query selecting the element to write to
  },
  "dont_create_base_texture": false, // prevent the creation of a base texture (when your test program provides it itself)
  "has_framebuffer": true,   // use framebuffer objects
  "framebuffers_n": 1,       // number of framebuffer objects to use as a chain
  "framebuffers_offset": 1   // the first FBO gets an active texture index equal to this offset
}
```
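
As a usage note, the `frame_update` hook is a natural place to animate uniforms between draws. The sketch below assumes `current_program` is the compiled `WebGLProgram` and that it is already in use when the hook fires; `u_time` is a hypothetical uniform name, not part of the library, and must be declared in your own fragment shader.

```js
// Sketch of a frame_update callback that drives a time uniform.
// Assumes the library has already bound current_program via gl.useProgram,
// and that the fragment shader declares: uniform float u_time;
const start = performance.now();

wtex.texturing({
  vertex_shader: vertexShader,     // from the earlier sketch
  fragment_shader: fragmentShader, // must declare u_time for this to work
  frame_update: (current_program, gl, opts) => {
    const loc = gl.getUniformLocation(current_program, "u_time");
    if (loc !== null) {
      gl.uniform1f(loc, (performance.now() - start) / 1000.0); // seconds elapsed
    }
  }
});
```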