kacpertopol/cam_board

Handle input from streaming webcam?

Closed this issue · 8 comments

Hi @kacpertopol , I discovered this very cool project after you posted an update on reddit. I dropped you a PM there regarding my attempts to get cam_board to use an alternate camera as its source, in my case an external wireless webcam. I also recorded some details under this pull request comment.

In short, I have a number of wireless webcams configured as scene sources in OBS. I then use the v4l2sink plugin to stream from OBS out to a v4l2loopback device, which video collaboration tools like Zoom, Slack, and Teams can use as their video source. Using OBS as a camera switcher, the scheme works well.

I have one camera (wbcam) mounted on the ceiling oriented at a traditional whiteboard mounted on the wall. I'm trying to cam_board-ify this camera.

So far I have been successful at standing up another v4l2loopback device (video5) and using ffmpeg to capture and copy the wbcam stream to video5:

$ ffmpeg -i http://<wbcam>/video/mjpeg -codec copy -f v4l2 /dev/video5

Then

$ cam_board --camera 5

will start the script on the virtual video device. However, the aruco symbols aren't recognized. I think it's because the camera-to-board geometry is different. Through the magic of ASCII art I will try to explain...

Traditional (laptop-to-desk) cam_board geometry:

        /c  <--- laptop camera tilted down
      /
    /
  /______       _______  <--- "whiteboard" surface
---------------------------------

This geometry makes the laptop camera see the whiteboard surface as a trapezoid with a wide bottom and narrow top. I suspect the logic that captures the aruco symbols and warps the whiteboard accordingly takes this into account...

Ceiling-to-wall cam_board geometry:

--------------------------------
        c  <--- ceiling webcam |
                               |
                             | |
                             | |   <--- whiteboard mounted
                             | |              to wall
                             | |
                               |
_______________________________|

In this configuration the camera sees the whiteboard surface as a trapezoid with a wide top and a narrow bottom. I suspect the logic to capture the aruco symbols isn't primed for this geometry and consequently it fails to capture the symbols.

My first questions: is the analysis directionally correct? If so, is this something we could add to the script, possibly as a command-line switch to prime the logic for the proper geometry strategy? Defaulting to tabletop, of course.

I'm happy to assist in whatever way is most helpful. I am an opencv noob.

Hi, thanks for your post. My first impression is that the markers might be too small, or oriented (rotated) differently than in to_print/...pdf (or to_print/...svg). I could be wrong - could you try to capture a couple of images (using cheese, for example) from the ceiling camera and post them here? I will try to load them in python and replicate the problem.

Thanks for the reply. Here's a screenshot of vlc reading directly from /dev/video5, being fed by ffmpeg as described above.

Screenshot_20210104_154302

The whiteboard dimensions are 22.875 inches high and 34.875 inches wide. I've printed the symbols as 2.69 inches square and positioned them as in the letter samples.

$ cam_board --camera 5

produces this:

Screenshot_20210104_155023

It seems like the aspect ratio is different. Is that correct?

Is it having a problem because the upper right corner of the board is clipped?

Huh. I used the webcam's digital zoom capability to step out a few steps, so that the upper corner of the board is no longer clipped:

Screenshot_20210104_161557

and immediately the cam_board window shifted to this:

Screenshot_20210104_161639

So that's something...

Hi @rburcham . I took your image and used it instead of the camera capture in the script. Unfortunately I'm not replicating your problem. The result I got is below:

image

Are you using the latest commit?

I just pulled and got an updated README.

Well, that's very encouraging. Was that with the clipped image or the unclipped image?

Can you give pointers on how you changed the source to the image for this test? Would it make sense to add that capability as a command-line feature to facilitate testing? It would let a user confirm their opencv/aruco/numpy setup is right and sanity-check aruco acquisition.

One thing that is curious to me: the video changes radically when cam_board reads video5, compared to vlc or another video player looking at the same device. The image is smaller, seems to have a different aspect ratio, and shows those horizontal-line video artifacts.

FWIW I'm on Gentoo, with numpy-1.19.4 and opencv-4.5.1.

Ok,

$ diff cam_board cam_board.dist
104,105c104
< #            ret, frame = cap.read()
<             frame = cv2.imread('/tmp/clipped_wb.png',1)
---
>             ret, frame = cap.read()

Got me the same results you shared above.

So it's got to be something about opencv reading from the loopback video device as a video source?

I've recompiled opencv with ffmpeg and gstreamer support, and now cv2.VideoCapture can open webcam URL streams directly. So no need for the v4l2loopback device and a separate ffmpeg process (which opencv didn't like).

I'll mess around and make the --camera switch conditionally accept a webcam URL in addition to a local video device, and throw you a PR.

I think this will do it

#8