raymanfx/eye-rs

Add `image` crate support

Closed this issue · 6 comments

I feel that integrating with the image crate would be a good move.
I've just been trying out eye and wanted to save a captured frame to an image file, which leads to something like this:

image::ImageBuffer::<image::Rgb<u8>, &[u8]>::from_raw(
    stream_desc.width,
    stream_desc.height,
    frame.as_bytes(),
)
.unwrap()
.save("image.png")?;

It could be as simple as having a to_image_buffer method on the Frame struct. Or, more radically, replacing Frame entirely and only returning an ImageBuffer (or something similar; I'm not well versed in it yet). You could even use image's pixel traits instead of PixelFormat?
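For illustration, here is a rough sketch of what I have in mind, written as a free function since I don't know Frame's internals (the name frame_to_image_buffer is made up, and it assumes tightly packed RGB24 data):

use image::{ImageBuffer, Rgb};

// Hypothetical helper; a real to_image_buffer would live on Frame.
// Assumes the buffer holds tightly packed RGB24 pixels.
fn frame_to_image_buffer(
    width: u32,
    height: u32,
    data: &[u8],
) -> Option<ImageBuffer<Rgb<u8>, &[u8]>> {
    // from_raw returns None if data.len() != width * height * 3.
    ImageBuffer::from_raw(width, height, data)
}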

I thought about this before and generally feel the same way. The problem is that the image crate is too restrictive and cannot handle arbitrary pixel formats all that well. For example, YUV formats such as YUYV, which are very common among consumer-grade hardware, are entirely unsupported: YUYV packs two pixels into four bytes and shares chroma samples between neighboring pixels, so there is no self-contained per-pixel representation. The pixel traits of the image crate are thus insufficient.
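To make that concrete, a small sketch using the image crate's public Pixel trait, which bakes a fixed channel count into every pixel type:

use image::Pixel;

// Every image::Pixel type advertises a fixed number of channels
// per pixel, e.g. 3 for Rgb<u8>.
fn channels<P: Pixel>() -> u8 {
    P::CHANNEL_COUNT
}

// YUYV stores 4 bytes per *pair* of pixels (Y0 U Y1 V), so a single
// YUYV pixel has no representation that could implement Pixel.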

@raymanfx maybe you can help with converting the frame to an image so I can save it; I'm having some problems with the conversion.
I'm specifying the stream descriptor like this:

use eye_hal::format::PixelFormat;
use eye_hal::stream::Descriptor as StreamDescriptor;

let stream_params = StreamDescriptor {
    width: 320,
    height: 240,
    pixfmt: PixelFormat::Rgb(24),
    interval: std::time::Duration::from_nanos(33333335),
};

Then I take a frame:

let mut stream = camera.start_stream(&stream_params)?;
let frame = stream
    .next()
    .expect("Stream is dead")
    .expect("Failed to capture frame");

use image::{ImageBuffer, Rgb};

let res = ImageBuffer::<Rgb<u8>, &[u8]>::from_raw(
    stream_params.width,
    stream_params.height,
    &frame,
)
.unwrap()
.save(file_path);

But when I save the image, it shows multiple horizontal lines; it seems the PixelFormat is not applied.

@hadhoryth I assume you are on Linux and thus using the v4l platform backend, is that correct?

The backend invokes set_format() here: https://github.com/raymanfx/eye-rs/blob/master/eye-hal/src/platform/v4l2/device.rs#L196. This boils down to the implementation in v4l-rs here: https://github.com/raymanfx/libv4l-rs/blob/master/src/video/macros.rs#L150.

As you can see, the implementation negotiates the pixel format with the Linux kernel (because that's what v4l2 is designed to do). The system call returns the actual format in use. The problem is that we don't propagate that back up the call chain right now. v4l-rs already does this - we just have to make use of the returned value in eye-rs.
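For illustration, here is roughly what that negotiation looks like when talking to v4l-rs directly (device index and the requested format are placeholders):

use v4l::video::Capture;
use v4l::{Device, Format, FourCC};

fn main() -> std::io::Result<()> {
    // Placeholder: open the first video device.
    let dev = Device::new(0)?;

    // Request RGB24; the driver is free to pick something else.
    let requested = Format::new(320, 240, FourCC::new(b"RGB3"));
    let actual = dev.set_format(&requested)?;

    // set_format() hands back the format the kernel actually chose.
    // This is the value eye-rs currently drops instead of propagating.
    if actual.fourcc != requested.fourcc {
        println!("driver negotiated {} instead of {}", actual.fourcc, requested.fourcc);
    }
    Ok(())
}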

Clearly, this is an API issue and needs to be fixed. Do you want to create a separate issue for tracking that item? I will try to have a look at it this week.

Just an example for eye-hal 0.2.0, in case someone finds this issue:

// [dependencies]
// eye-hal = "0.2.0"
// image = "0.25.1"

use eye_hal::format::PixelFormat;
use eye_hal::traits::{Context, Device, Stream};
use eye_hal::PlatformContext;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a context
    let ctx = PlatformContext::default();

    // Query for available devices.
    let devices = ctx.devices()?;

    // First, we need a capture device to read images from. For this example, let's just choose
    // whatever device is first in the list.
    let dev = ctx.open_device(&devices[0].uri)?;

    // Query for available streams and just choose the first one.
    let streams = dev.streams()?;
    let stream_desc = streams[0].clone();
    println!("Stream: {:?}", stream_desc);

    // Since we want to capture images, we need to access the native image stream of the device.
    // The backend will internally select a suitable implementation for the platform stream. On
    // Linux for example, most devices support memory-mapped buffers.
    let mut stream = dev.start_stream(&stream_desc)?;

    // Here we create a loop and just capture images as long as the device produces them. Normally,
    // this loop will run forever unless we unplug the camera or exit the program.
    loop {
        let frame = stream
            .next()
            .expect("Stream is dead")
            .expect("Failed to capture frame");
        match stream_desc.pixfmt {
            PixelFormat::Custom(fmt) if fmt == "YUYV" => {
                let rgb_frame: Vec<u8> = frame.chunks_exact(4).fold(vec![], |mut acc, chunk| {
                    // Each 4-byte chunk holds two luma samples (y0, y1)
                    // sharing one chroma pair (u, v): Y0 U Y1 V.
                    let [y0, u, y1, v]: [u8; 4] = std::convert::TryFrom::try_from(chunk).unwrap();
                    let u = u as f32;
                    let v = v as f32;

                    // ITU-R BT.601 YCbCr -> RGB, once per luma sample.
                    for y in [y0, y1] {
                        let y = y as f32;

                        let r = 1.164 * (y - 16.) + 1.596 * (v - 128.);
                        let g = 1.164 * (y - 16.) - 0.813 * (v - 128.) - 0.391 * (u - 128.);
                        let b = 1.164 * (y - 16.) + 2.018 * (u - 128.);

                        // Float-to-int casts saturate, clamping to 0..=255.
                        acc.push(r as u8);
                        acc.push(g as u8);
                        acc.push(b as u8);
                    }
                    acc
                });
                image::ImageBuffer::<image::Rgb<u8>, &[u8]>::from_raw(
                    stream_desc.width,
                    stream_desc.height,
                    &rgb_frame,
                )
                .ok_or("failed to convert bytes to an image")?
                .save("image.png")?;
            }
            _ => unimplemented!("pixel format: {:?}", stream_desc.pixfmt),
        }
        break Ok(());
    }
}

Thanks! Would you mind submitting a PR and adding this as an example?

Example has been merged in 095c983.