How to apply multiple shaders to one object via multi-pass rendering?
historydev opened this issue · 0 comments
As I understand it, I need to take a "screenshot" of the image to which the first shader has already been applied and write it to render_target2, so that the second shader can then be applied to that. Here is my code:
```rust
use macroquad::prelude::*;

#[macroquad::main("Post processing")]
async fn main() {
    let mut render_target1 = render_target(screen_width() as u32, screen_height() as u32);
    render_target1.texture.set_filter(FilterMode::Nearest);
    let mut render_target2 = render_target(screen_width() as u32, screen_height() as u32);
    render_target2.texture.set_filter(FilterMode::Nearest);

    let material = load_material(
        ShaderSource::Glsl {
            vertex: CRT_VERTEX_SHADER,
            fragment: CRT_FRAGMENT_SHADER,
        },
        Default::default(),
    )
    .unwrap();
    let material2 = load_material(
        ShaderSource::Glsl {
            vertex: CRT_VERTEX_SHADER2,
            fragment: CRT_FRAGMENT_SHADER2,
        },
        Default::default(),
    )
    .unwrap();

    loop {
        // Recreate the render targets if the window was resized.
        if screen_width() != render_target1.texture.width()
            || screen_height() != render_target1.texture.height()
        {
            render_target1 = render_target(screen_width() as u32, screen_height() as u32);
            render_target1.texture.set_filter(FilterMode::Nearest);
        }
        if screen_width() != render_target2.texture.width()
            || screen_height() != render_target2.texture.height()
        {
            render_target2 = render_target(screen_width() as u32, screen_height() as u32);
            render_target2.texture.set_filter(FilterMode::Nearest);
        }

        let aspect_ratio = screen_width() / screen_height();

        // Pass 1: draw the scene into render_target1.
        set_camera(&Camera2D {
            zoom: vec2(0.01 / aspect_ratio, 0.01),
            target: vec2(0.0, 0.0),
            render_target: Some(render_target1.clone()),
            ..Default::default()
        });
        clear_background(LIGHTGRAY);
        draw_line(-30.0, 45.0, 30.0, 45.0, 3.0, BLUE);
        draw_poly(-45.0, -35.0, 60, 20.0, 0., YELLOW);

        // Pass 2: draw render_target1's texture into render_target2
        // with the first shader applied.
        set_camera(&Camera2D {
            render_target: Some(render_target2.clone()),
            ..Default::default()
        });
        clear_background(WHITE);
        gl_use_material(&material);
        draw_texture_ex(
            &render_target1.texture,
            0.,
            0.,
            WHITE,
            DrawTextureParams {
                dest_size: Some(vec2(screen_width(), screen_height())),
                ..Default::default()
            },
        );
        gl_use_default_material();

        // Final pass: draw render_target2's texture to the screen
        // with the second shader applied.
        set_default_camera();
        clear_background(WHITE);
        gl_use_material(&material2);
        draw_texture_ex(
            &render_target2.texture,
            0.,
            0.,
            WHITE,
            DrawTextureParams {
                dest_size: Some(vec2(screen_width(), screen_height())),
                ..Default::default()
            },
        );
        gl_use_default_material();

        next_frame().await;
    }
}
```
After playing around a bit, I found that the problem is the position of the camera that renders into render_target2: for some reason the image is shifted far to the right. To make it appear as it was, I had to set that camera's target to texture width / 2 and texture height / 2, and also halve its zoom (0.005 instead of 0.01).

But why? Is it supposed to work like this?
```rust
// Pass 1 camera: unchanged.
set_camera(&Camera2D {
    zoom: vec2(0.01 / aspect_ratio, 0.01),
    target: vec2(0.0, 0.0),
    render_target: Some(render_target1.clone()),
    ..Default::default()
});
clear_background(LIGHTGRAY);
draw_line(-30.0, 45.0, 30.0, 45.0, 3.0, BLUE);
draw_poly(-45.0, -35.0, 60, 20.0, 0., YELLOW);

// Pass 2 camera: halved zoom, target moved to the texture center.
set_camera(&Camera2D {
    zoom: vec2(0.005 / aspect_ratio, 0.005),
    target: vec2(
        render_target1.texture.width() / 2.,
        render_target1.texture.height() / 2.,
    ),
    render_target: Some(render_target2.clone()),
    ..Default::default()
});
```
However, the quality of the resulting image is much lower, so apparently I'm still doing something wrong. Help me figure this out.