Plonq/bevy_panorbit_camera

Please add support for RenderTarget::Image()

Closed this issue · 9 comments

I draw my app's GUI with bevy_ui. I set the camera target of a Camera3dBundle (which has a PanOrbitCamera) to RenderTarget::Image(some_handle), and spawn an ImageBundle (a bevy_ui bundle) whose image texture "captures" that camera's output, but the mouse events (mouse wheel, mouse click) don't work for me. The only way to change the Transform of the Camera3dBundle right now is to query its Transform and set the translation myself. So, is there a way to use PanOrbitCamera when the Camera3dBundle's render target is set to RenderTarget::Image? (I know RenderTarget::Window works.)

Here's some of my code:

...some code...
let image_handle = generate_render_target_image(&mut images, 1000, 1000);

commands.spawn((
    Camera3dBundle {
        camera: Camera {
            order: camera_order_1,
            ..default()
        },
        camera_3d: Camera3d {
            clear_color: ClearColorConfig::None,
            ..default()
        },
        ..default()
    },
    render_layer_1,
));

// Spawn an ImageBundle to "capture" the Camera3dBundle.
commands
    .spawn((
        NodeBundle {
            style: Style {
                size: Size::new(Val::Px(1600.), Val::Px(1000.)),
                ..default()
            },
            background_color: BackgroundColor(Color::AZURE),
            ..default()
        },
        render_layer_1,
    ))
    .with_children(|parent| {
        parent.spawn((
            ImageBundle {
                style: Style {
                    size: Size::all(Val::Percent(80.)),
                    ..default()
                },
                image: UiImage::new(image_handle.clone()),
                ..default()
            },
            render_layer_2,
        ));
    });

commands.spawn((
    MaterialMeshBundle {
        mesh: meshes.add(mesh.clone()),
        material: materials.add(Color::GREEN.into()),
        ..default()
    },
    render_layer_2,
));

commands.spawn((
    Camera3dBundle {
        camera: Camera {
            order: camera_order_2,
            target: RenderTarget::Image(image_handle),
            ..default()
        },
        camera_3d: Camera3d {
            clear_color: ClearColorConfig::Custom(Color::BLACK),
            ..default()
        },
        ..default()
    },
    UiCameraConfig { show_ui: false },
    render_layer_2,
    PanOrbitCamera {
        zoom_sensitivity: 0.5,
        reversed_zoom: false,
        ..default()
    },
));
...

pub fn generate_render_target_image(
    images: &mut ResMut<Assets<Image>>,
    width: u32,
    height: u32,
) -> Handle<Image> {
    let render_target_size = Extent3d {
        width,
        height,
        ..default()
    };

    // This is the texture that will be rendered to.
    let mut render_target_image = Image {
        texture_descriptor: TextureDescriptor {
            label: None,
            size: render_target_size,
            dimension: TextureDimension::D2,
            format: TextureFormat::Bgra8UnormSrgb,
            mip_level_count: 1,
            sample_count: 1,
            usage: TextureUsages::TEXTURE_BINDING
                | TextureUsages::COPY_DST
                | TextureUsages::RENDER_ATTACHMENT,
            view_formats: &[],
        },
        ..default()
    };

    // fill image.data with zeroes
    render_target_image.resize(render_target_size);

    images.add(render_target_image)
}
Plonq commented

Hi @VitoKingg, thanks for reporting the issue. It does currently assume the camera is rendering to a window, so it can support multiple windows and/or viewports, and differentiate the input events between them.

Come to think of it though, most people won't need that functionality so I might put it behind a config option, so the default behaviour doesn't care what the render target is. This should solve your problem.

I will look into this next week as I'm away this weekend.

Plonq commented

@VitoKingg I'd like to know a bit more about your use case. Are you showing the image you're rendering to in the UI? If so, that sounds very similar to having multiple viewports (see multiple_viewports example). Is there a reason using viewports won't work for you?

I think adding render-to-image support is a bit awkward, for one main reason: there's no way to know how the image is being displayed, and therefore no easy way to map mouse input to image coordinates. If that's not possible and the main window is used for input instead, the controls will feel completely wrong. As an example, right now panning at default sensitivity moves the camera the same visual 'amount' as the mouse moves, so it feels natural. But without being able to map mouse input to the rendered area, panning will either be too sensitive or not sensitive enough, and, worse, inconsistent if the image size changes.
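Why the pan scaling depends on the rendered area's size can be sketched as a pure function. This is illustrative only (the names and exact formula are assumptions, not bevy_panorbit_camera's actual code): to make panning feel like 'grabbing' the scene, the mouse delta in pixels must be converted to world units using the viewport height, the camera's distance from the focus point, and its vertical FOV.

```rust
/// World-space pan distance that visually matches a mouse movement of
/// `delta_px` pixels. The plane through the focus point, perpendicular to
/// the view direction, spans `2 * distance * tan(fov_y / 2)` world units
/// across `viewport_height_px` pixels; without knowing the real displayed
/// size (e.g. when the target is an arbitrary image), this breaks down.
fn pan_world_units(delta_px: f32, viewport_height_px: f32, distance: f32, fov_y: f32) -> f32 {
    let visible_world_height = 2.0 * distance * (fov_y * 0.5).tan();
    delta_px / viewport_height_px * visible_world_height
}

fn main() {
    // 100 px of mouse movement on a 1000 px tall viewport, with the camera
    // 5 world units from the focus point and a 90-degree vertical FOV.
    let pan = pan_world_units(100.0, 1000.0, 5.0, std::f32::consts::FRAC_PI_2);
    println!("pan = {pan} world units");
}
```

If `viewport_height_px` comes from the window while the image is actually displayed at half that height, every pan overshoots by 2x, which is the 'completely wrong' feel described above.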

@Plonq Currently I'm using bevy_ui for the app layout, to get the benefit of its flexbox mechanism. However, there is no way to insert a 2D/3D bundle into a Bevy UI NodeBundle (the app will crash).

I found a way to insert an ImageBundle (a Bevy UI bundle) into a NodeBundle: create an image handle (Handle<Image>) and assign it to both the UiImage of the ImageBundle and the target (RenderTarget::Image) of the Camera3dBundle. I use this method as a hack to render 2D/3D bundles into the Bevy UI.

The reason I've chosen not to use viewports is that I would have to change the viewport's position and size myself whenever I scale or resize the app window, and none of that work is needed when I use Bevy UI bundles with flexbox layout.

I know it's a little awkward to render 2D/3D bundles to an image texture, but it does save me time on all that calculation work. 😂😂😂

Many thanks for your reply~

Plonq commented

@VitoKingg thanks for the explanation, I can understand why you wouldn't want to use viewports. I will try to think of a way to add render-to-image support that works for that use case, but I want to avoid adding too much complexity or requiring too much configuration, so I can't promise anything. By your own admission, this use case is already a hack, and I don't want to modify this lib just to accommodate a hack.

Could you provide a minimal example app that does what you describe?

@Plonq here it is:

use bevy::{
    core_pipeline::clear_color::ClearColorConfig,
    input::mouse::MouseWheel,
    prelude::*,
    render::{
        camera::RenderTarget,
        render_resource::{
            Extent3d, TextureDescriptor, TextureDimension, TextureFormat, TextureUsages,
        },
        view::RenderLayers,
    },
    window::PresentMode,
};
use bevy_panorbit_camera::{PanOrbitCamera, PanOrbitCameraPlugin};

#[derive(Component, Debug)]
struct CameraMarker;

pub fn main() {
    App::new()
        .add_plugins(DefaultPlugins.set(WindowPlugin {
            primary_window: Some(Window {
                present_mode: PresentMode::AutoNoVsync,
                position: WindowPosition::Centered(MonitorSelection::Current),
                title: "Hello, world!".to_string(),
                resize_constraints: WindowResizeConstraints {
                    min_width: 1440.,
                    min_height: 900.,
                    ..default()
                },
                ..default()
            }),
            ..default()
        }))
        .add_plugin(PanOrbitCameraPlugin)
        .insert_resource(AmbientLight {
            color: Color::WHITE,
            brightness: 1.0,
        })
        .add_startup_system(setup)
        .add_system(mouse_system)
        .run();
}

fn setup(
    mut commands: Commands,
    mut images: ResMut<Assets<Image>>,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    let image_handle = generate_image(&mut images, 800, 800);

    let layer_one = RenderLayers::layer(1);
    let order_one = 1;

    let layer_two = RenderLayers::layer(2);
    let order_two = 2;

    commands.spawn((
        Camera2dBundle {
            camera: Camera {
                order: order_one,
                ..default()
            },
            ..default()
        },
        layer_one,
    ));
    commands
        .spawn((
            NodeBundle {
                style: Style {
                    min_size: Size::all(Val::Percent(100.0)),
                    display: Display::Flex,
                    flex_direction: FlexDirection::Row,
                    flex_grow: 1.0,
                    ..default()
                },
                background_color: BackgroundColor(Color::AZURE),
                ..default()
            },
            layer_one,
        ))
        .with_children(|parent| {
            parent.spawn((
                NodeBundle {
                    style: Style {
                        size: Size::new(Val::Px(200.0), Val::Percent(100.0)),
                        ..default()
                    },
                    background_color: BackgroundColor(Color::TEAL),
                    ..default()
                },
                layer_one,
            ));

            parent
                .spawn((
                    NodeBundle {
                        style: Style {
                            min_size: Size::new(Val::Px(500.0), Val::Percent(100.0)),
                            flex_direction: FlexDirection::Row,
                            align_items: AlignItems::Center,
                            flex_basis: Val::Px(500.0),
                            flex_grow: 1.0,
                            ..default()
                        },
                        background_color: BackgroundColor(Color::GOLD),
                        ..default()
                    },
                    layer_one,
                ))
                .with_children(|parent| {
                    parent.spawn((
                        ImageBundle {
                            style: Style {
                                size: Size::new(Val::Percent(100.0), Val::Percent(80.0)),
                                margin: UiRect::all(Val::Auto),
                                ..default()
                            },
                            image: UiImage::new(image_handle.clone()),
                            ..default()
                        },
                        layer_two,
                    ));
                });

            parent.spawn((
                NodeBundle {
                    style: Style {
                        size: Size::new(Val::Px(200.0), Val::Percent(100.0)),
                        ..default()
                    },
                    background_color: BackgroundColor(Color::MAROON),
                    ..default()
                },
                layer_one,
            ));
        });

    commands.spawn((
        PbrBundle {
            mesh: meshes.add(Mesh::from(shape::Cube { size: 1.0 })),
            material: materials.add(Color::rgb(0.8, 0.7, 0.6).into()),
            transform: Transform::from_xyz(0.0, 0.5, 0.0),
            ..default()
        },
        layer_two,
    ));

    commands.spawn((
        Camera3dBundle {
            camera: Camera {
                order: order_two,
                target: RenderTarget::Image(image_handle),
                ..default()
            },
            camera_3d: Camera3d {
                clear_color: ClearColorConfig::Custom(Color::BLACK),
                ..default()
            },
            transform: Transform::from_translation(Vec3::new(0.0, 0.0, 10.0)),
            ..default()
        },
        // !IMPORTANT: This prevents double rendering of UI.
        UiCameraConfig { show_ui: false },
        // PanOrbitCamera::default(),
        CameraMarker,
        layer_two,
    ));
}

fn generate_image(images: &mut ResMut<Assets<Image>>, width: u32, height: u32) -> Handle<Image> {
    let render_target_size = Extent3d {
        width,
        height,
        ..default()
    };

    // This is the texture that will be rendered to.
    let mut render_target_image = Image {
        texture_descriptor: TextureDescriptor {
            label: None,
            size: render_target_size,
            dimension: TextureDimension::D2,
            format: TextureFormat::Bgra8UnormSrgb,
            mip_level_count: 1,
            sample_count: 1,
            usage: TextureUsages::TEXTURE_BINDING
                | TextureUsages::COPY_DST
                | TextureUsages::RENDER_ATTACHMENT,
            view_formats: &[],
        },
        ..default()
    };

    // fill image.data with zeroes
    render_target_image.resize(render_target_size);

    images.add(render_target_image)
}

fn mouse_system(
    mut mouse_wheel_events: EventReader<MouseWheel>,
    mut camera_marker: Query<&mut Transform, With<CameraMarker>>,
) {
    if let Ok(mut transform) = camera_marker.get_single_mut() {
        for event in mouse_wheel_events.iter() {
            info!("{:?}", event);
            transform.translation.z -= event.y * 1.0;
        }
    }
}
Plonq commented

@VitoKingg thanks for that.

I've tried out a few things, and I'm not happy with any of them. I don't think it makes sense to add render to image support to this library.

My reasoning goes like this. PanOrbitCamera scales mouse input based on the viewport of the camera. This scaling means that moving the mouse one full viewport width will rotate 360 degrees. When panning, the focus point will move by the same amount as the mouse, visually, so it feels like you're 'grabbing' the object. This is true no matter what size the viewport is, or whether it fills the window or not.
Now, since it's impossible to determine the size at which the image is displayed, this scaling is not possible when rendering to an image. So, say I add a config option single_window_mode which, when true, always uses the primary window as the source of mouse input. This sort of solves the issue: the camera will be controllable. However, the scaling will use the window, not the image, and so will be wrong, especially if the image is displayed much smaller (or larger) than the window.

I could then add a config option to specify the rect where the image is rendered, but that would need manual updating every frame if the image changes. And if you get that far, why not just render to a viewport and manually update the viewport size?
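The 'one full viewport width rotates 360 degrees' rule above can likewise be sketched as a pure function (illustrative names and an assumed formula, not the crate's actual implementation):

```rust
/// Map a horizontal mouse delta (physical pixels) to a yaw delta in
/// radians, so that dragging across one full viewport width rotates a
/// full turn. Illustrative sketch only.
fn mouse_to_yaw(delta_x_px: f32, viewport_width_px: f32) -> f32 {
    delta_x_px / viewport_width_px * std::f32::consts::TAU
}

fn main() {
    // Dragging half the viewport width yields half a turn (PI radians).
    let yaw = mouse_to_yaw(400.0, 800.0);
    println!("yaw = {yaw} rad");
}
```

With a hypothetical single_window_mode, `viewport_width_px` would have to come from the window, so the rotation speed relative to the displayed image would be off by the ratio of window width to displayed-image width.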

So yes, I could do some or all of the above, but it's either going to be only a partial solution, or it will still need manual work from you anyway. Therefore, I believe it doesn't make sense to modify the lib to accommodate this hack.

If you've got any alternative solutions that don't have the above issues, I'm all ears. I've got the partial solution described above in a branch if you're interested (you can of course set that branch in your Cargo.toml).
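For anyone following along, a git-branch dependency in Cargo.toml looks like this (the branch name below is a placeholder, since the actual branch isn't named in this thread):

```toml
[dependencies]
# "render-to-image" is a placeholder branch name; substitute the real one.
bevy_panorbit_camera = { git = "https://github.com/Plonq/bevy_panorbit_camera", branch = "render-to-image" }
```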

@Plonq Excellent!

I'm considering using egui instead of bevy_ui, since my app is becoming more complicated than I expected. Therefore, I need to render those things to a viewport and manually update its size and scale.
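The manual viewport update mentioned here can be sketched as a pure helper. Everything below is illustrative (assumed names and a made-up fractional region); in Bevy the computed rect would be written into `Camera::viewport` from a system that runs when the window is resized:

```rust
/// Minimal stand-in for a physical-pixel viewport rect.
#[derive(Debug, PartialEq)]
struct ViewportRect {
    position: (u32, u32),
    size: (u32, u32),
}

/// Compute the viewport covering a fractional region of the window,
/// e.g. the area a UI panel occupies. `region` is (left, top, width,
/// height), each as a fraction of the window in 0.0..=1.0.
fn viewport_for_region(
    window_width: u32,
    window_height: u32,
    region: (f32, f32, f32, f32),
) -> ViewportRect {
    let (l, t, w, h) = region;
    ViewportRect {
        position: (
            (window_width as f32 * l) as u32,
            (window_height as f32 * t) as u32,
        ),
        // Clamp to at least 1 px; a zero-sized viewport is invalid.
        size: (
            ((window_width as f32 * w) as u32).max(1),
            ((window_height as f32 * h) as u32).max(1),
        ),
    }
}

fn main() {
    // A 1600x1000 window whose centre panel spans x in [200, 1400).
    let vp = viewport_for_region(1600, 1000, (0.125, 0.0, 0.75, 1.0));
    println!("{vp:?}");
}
```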

Plonq commented

Hi @VitoKingg, I've come up with an alternative solution thanks to a feature request which may solve this issue. I know this is likely no longer relevant but wanted to let you know regardless. Check out #33 for details.

@Plonq Thank you for taking the time.