
Camera Feeds


I initiated some dialogue with an industry expert about displaying video feeds in WPF as natively as possible, and he was willing to share some of his experience implementing it, specifically with IP cameras. I'm copying the conversation here to compile the information we have on the matter and hopefully come up with a solution that meets our needs.

His reply:

I'll explain a couple ways we achieved smooth video in WPF in hopes this sets you on the right path.

The real issue with doing video + UI in WPF is the "air-space" problem: if done wrong, the UI can't composite elements over the video (pop-up menus, video controls, text, etc.).

My first successful attempt at solving air-space issues was to write a custom DirectShow source filter (this is what you saw in my blog post, and it was before the D3DImage element existed). We registered the source filter with DirectShow using a custom URL scheme (something like pelco:// if I remember correctly). Then, the C# app would create a MediaSource (something like that) element and set the source URL to something like "pelco://<address>:<port>/?<params>". This would cause DirectShow to load our custom source filter, and we could send whatever video we wanted down the video pipeline for rendering. Since the MediaSource element had full control, there were no compositing (air-space) issues.
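
For reference, here's a rough sketch of what the C# side of that approach might look like. It assumes a custom DirectShow source filter has already been registered to handle the pelco:// scheme, and it uses WPF's MediaElement (presumably the "MediaSource" element he's recalling); the scheme, address, and query string below are placeholders, not values from his product:

```csharp
using System;
using System.Windows;
using System.Windows.Controls;

public class VideoWindow : Window
{
    public VideoWindow()
    {
        // MediaElement rides on the stock Windows media pipeline, so opening a
        // URI whose scheme a custom DirectShow source filter is registered for
        // causes DirectShow to load that filter and pull our frames through
        // the normal rendering pipeline.
        var video = new MediaElement
        {
            LoadedBehavior = MediaState.Manual,
            Source = new Uri("pelco://192.168.1.50:554/?channel=1") // placeholder address/params
        };

        Content = video;
        video.Play();
    }
}
```

Because the video is rendered by the same element WPF composites, overlays drawn in XAML land on top of it normally, which is the whole point of this approach.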

The final (successful) method, the one we shipped a product with, used the D3DImage element in WPF/C#. I don't have the code, but I've been in touch with our old UI guru and another teammate who did the actual rendering implementation. Here's as much detail as everyone could remember (from 3 years ago):

  1. The video was composited on a D3DImage element in C#.

  2. The C# app would register for the "rendering" event and call down to our unmanaged (C++) video pipeline to get the latest video frame (see the sketch after this list).

  3. Our unmanaged video renderer kept a front/back buffer pair... we would copy the back buffer to the D3D surface of the app when called, and we were constantly swapping front and back buffers with the latest video frames.

    a. I think the C# app was what created the D3D surface that our C++ video renderer blitted the video to. Regardless of who created it, the C# app was somehow setting or copying that surface onto the D3DImage element.

    b. If memory serves, WPF (actually, the Windows UI compositor) would fire the render event at a 40ms interval (25fps). Even with video at 30fps, it still looked really good (the only time the mismatch was noticeable was with video of cars on a freeway... the smooth motion of the cars was a little off if you looked closely).
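
To make steps 1-3 concrete, here's a rough sketch of the C# side as I understand it. D3DImage, CompositionTarget.Rendering, and the Lock/SetBackBuffer/AddDirtyRect/Unlock calls are the real WPF API; the NativeVideoPipeline.dll exports (CreateRenderSurface, CopyLatestFrame) are hypothetical stand-ins for whatever interface the unmanaged C++ renderer would actually expose:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Interop;
using System.Windows.Media;

public class VideoHost : Image
{
    // Hypothetical exports from the unmanaged (C++) video pipeline; the real
    // project would define whatever interface its renderer needs.
    [DllImport("NativeVideoPipeline.dll")]
    private static extern IntPtr CreateRenderSurface(int width, int height);

    [DllImport("NativeVideoPipeline.dll")]
    private static extern bool CopyLatestFrame(IntPtr targetSurface);

    private readonly D3DImage _d3dImage = new D3DImage();
    private IntPtr _surface;

    public VideoHost()
    {
        Source = _d3dImage;
        Loaded += OnLoaded;
        Unloaded += (s, e) => CompositionTarget.Rendering -= OnRendering;
    }

    private void OnLoaded(object sender, RoutedEventArgs e)
    {
        // The native side creates (or is handed) an IDirect3DSurface9 that
        // WPF will treat as the back buffer of the D3DImage.
        _surface = CreateRenderSurface(1280, 720);

        _d3dImage.Lock();
        _d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, _surface);
        _d3dImage.Unlock();

        // WPF raises this event once per UI composition pass; this is the
        // "rendering" event step 2 refers to.
        CompositionTarget.Rendering += OnRendering;
    }

    private void OnRendering(object sender, EventArgs e)
    {
        if (_surface == IntPtr.Zero || !_d3dImage.IsFrontBufferAvailable)
            return;

        _d3dImage.Lock();

        // Ask the unmanaged renderer for its most recent frame and blit it
        // into the surface WPF is compositing from.
        if (CopyLatestFrame(_surface))
        {
            _d3dImage.AddDirtyRect(
                new Int32Rect(0, 0, _d3dImage.PixelWidth, _d3dImage.PixelHeight));
        }

        _d3dImage.Unlock();
    }
}
```

The appeal of this design is that the native renderer's front/back buffer swap decouples the camera's frame rate from WPF's composition rate: WPF only pulls a frame when it composites (roughly every 40ms in his case), which is why a 30fps stream still looked smooth.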

Related Reading: D3DImage