BreeeZe/rpos

Feature Request: Stereo Cameras through ONVIF viewer

Closed this issue · 13 comments

Hello,
I'm using a Raspberry Pi Compute Module 4 IO board and I downloaded the ONVIF program. I can view the video using the ONVIF Device Manager on a separate computer.
I know the stereo cameras work because I can see video using raspivid -3d sbs and see both cameras on one feed. Wondering if anyone knows of a way to set up the Compute Module 4 for stereo cameras side-by-side when viewed through the ONVIF Device Manager?

I worked on stereoscopic video projects between 1997 and 2002.

You could take a look at the rpos folder with the gstreamer python script in it.
It uses a gstreamer wrapper around raspivid to get video and then encode it.
There may be flags in there to trigger sbs video mode.

Last year I bought a load of Sony EyeToy cameras from eBay really cheap. I did not want them for the image clarity, but to have cameras with the same lens and optics, and I set up a stereoscopic trial in my office. That setup opened /dev/video0 and /dev/video1 and then merged the two images in the gstreamer pipeline (or in the ffmpeg filter-complex pipeline). Worked OK on the Pi, but images had to be 640x480 or less due to USB bus limitations.
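
The ffmpeg variant of that merge would have looked something like this (a hypothetical reconstruction from the description above, not the exact command; the device paths, sizes and output name are assumptions):

	import subprocess

	# Hypothetical reconstruction (untested): merge two V4L2 cameras
	# side by side with ffmpeg's filter_complex / hstack filter.
	# Device paths, the 640x480 size and the output name are assumptions.
	subprocess.run([
	    'ffmpeg',
	    '-f', 'v4l2', '-video_size', '640x480', '-i', '/dev/video0',
	    '-f', 'v4l2', '-video_size', '640x480', '-i', '/dev/video1',
	    '-filter_complex', '[0:v][1:v]hstack=inputs=2[v]',
	    '-map', '[v]', '-c:v', 'libx264', 'stereo.mp4',
	])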

It would be interesting to get this working, and to do an anaglyph too, and to see how well it works with Google Cardboard stereoscopic video goggles.

I looked through the rpos folder. I found /rpos/python/gst-rtsp-launch.py but didn't see a gstreamer python script. Do you recall the directory and filename of the python script?
Do you have more information on how you merged video0 and video1 into the gstreamer pipeline?
Using the Google goggles would be interesting. Just had a thought. Maybe it could pan/tilt with head movement too.

Using Google Goggles to control pan and tilt would re-create exactly what I built back in 1997, where we used VR headsets with head trackers. Gosh, 24 years ago. That would be interesting.

The Python script I was referring to is ../rpos/python/gst-rtsp-launch.py
The Python script generates a string which is passed into gst-launch.
This makes a Gstreamer pipeline. The output of that pipeline is then fed into the GStreamer RTSP server.

So you'd need to modify the way the string is built so that it opens /dev/video0 and /dev/video1 and places the two images side by side (or top and bottom, or interleaved, or color converted and blended for anaglyph).
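
For example, the rebuilt string might look something like this (an untested sketch in the style of the existing launch_str code; videomixer, the devices and the sizes are illustrative assumptions):

	# Untested sketch: build a launch string that opens two V4L2 devices
	# and mixes them side by side before the H.264 encode. Devices,
	# sizes and the mixer element are illustrative assumptions.
	launch_str = ('( videomixer name=m sink_0::xpos=0 sink_1::xpos=640'
	              ' ! omxh264enc ! video/x-h264,profile=baseline'
	              ' ! h264parse ! rtph264pay name=pay0 pt=96'
	              ' v4l2src device=/dev/video0 ! videoconvert'
	              ' ! video/x-raw,width=640,height=480 ! m.'
	              ' v4l2src device=/dev/video1 ! videoconvert'
	              ' ! video/x-raw,width=640,height=480 ! m. )')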

I was looking at rpicamsrc in gst-rtsp-launch.py but didn't see a way to set up stereo cameras. I added camera-number=1 to rpicamsrc to see if this would switch to camera 1, but it didn't. I found that in /lib/camera.js v4l2rtspserver is used with device /dev/video0 hardcoded. I changed all the video0s to video1s and re-ran node rpos.js, and video1 was then displayed.
Am I headed in the right direction for stereo cameras? Would I have to see if v4l2rtspserver somehow supports stereo cameras?

Hi Jim
There is no stereoscopic support. What I meant was that you (or someone) will need to modify the Python script to read from 2 cameras and merge the video, making these software changes to the gstreamer pipeline in the Python file.

It is not as simple as changing one parameter.

Are you a programmer? or more of an Admin / Advanced User type person?

I program using C on embedded platforms, so I have a bit of a learning curve for Python, CPP and JS. There is a simple python script for setting up stereo cameras. I'll keep searching to find the Python file that needs to be updated.

Hi Jim
You don't really need to know any Python.

The file is /rpos/python/gst-rtsp-launch.py
This builds an ASCII string which is passed into a program called GStreamer

		else: # USB Camera
			# Ignore most of the parameters
			log.info("USB camera ignored most of the parameters")
			launch_str = '( v4l2src device='+self.device+' brightness='+str(self.brightness)+' contrast='+str(self.contrast)+' saturation='+str(self.saturation)
			launch_str = launch_str + ' ! image/jpeg,width='+str(self.width)+',height='+str(self.height)+',framerate='+str(self.fps)+'/1 ! jpegdec ! clockoverlay ! omxh264enc target-bitrate='+str(self.bitrate)+' control-rate=variable ! video/x-h264,profile=baseline ! h264parse ! rtph264pay name=pay0 pt=96 )'

Just normal string concatenation.

It makes a gstreamer pipeline which is then passed as a parameter to the gst-launch software.

You need to make a new ASCII string to pass into gst-launch that can open 2 cameras (2 instances of v4l2src) and place them side by side using some other gstreamer elements.

So all the work is in the gstreamer pipeline.

I'm using the Pi 4 Compute Module that supports 2 Pi cameras (non-USB). Is v4l2src easier to use than the "picam" rpicamsrc?
I did notice that in the rpicamsrc section, switching between camera-number=0 and camera-number=1 selects between the 2 cameras.

I did a test today with 2 x v4l2 sources and merging them in software.
You can test gstreamer pipelines from the command line. I used:

gst-launch-1.0 videomixer name=m sink_0::xpos=0 sink_1::xpos=320  ! autovideosink sync=false  \
         v4l2src device=/dev/video0  ! videoconvert ! videorate ! video/x-raw,width=320,framerate=5/1 ! clockoverlay ! m.    \
        v4l2src device=/dev/video2 !  videoconvert ! videorate ! video/x-raw,width=320,framerate=5/1 ! clockoverlay ! m.

This opened two v4l2 devices (USB cameras) at 320x240 and put 5fps video into the video mixer that placed the video side by side.

In this example I pass the video into autovideosink to display on my Pi desktop, but in rpos it would pass into the rest of the gstreamer pipeline to be h264 encoded, put into RTP payloads and sent to the RTSP server.

On my Pi it really struggled with 2 USB cameras, even at 320x240.

As for your changes, you may be able to open both cameras as V4L2 devices and mix them like I have. Or you may be able to pass flags to rpicamsrc.
There is a third option: you can get Gstreamer to take video from stdin (eg h264 compressed video) by running raspivid (spawned in the Python) and passing the raspivid output into the gst-launch pipeline.
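
That third option could be sketched from Python roughly as follows (untested; the resolution, framerate and the fakesink tail are placeholder assumptions, and in rpos the RTSP server would attach to pay0 rather than a sink):

	import subprocess

	# Untested sketch of the third option: spawn raspivid in stereo
	# side-by-side mode and pipe its H.264 output into gst-launch via
	# fdsrc. Resolution and framerate are placeholder assumptions.
	raspivid = subprocess.Popen(
	    ['raspivid', '-3d', 'sbs', '-t', '0', '-w', '1280', '-h', '720',
	     '-fps', '30', '-o', '-'],  # '-o -' sends the H.264 stream to stdout
	    stdout=subprocess.PIPE)

	# fdsrc fd=0 reads gst-launch's own stdin, i.e. the raspivid output.
	# fakesink is only there so the sketch runs standalone; in rpos the
	# RTSP server attaches to the element named pay0 instead.
	pipeline = 'fdsrc fd=0 ! h264parse ! rtph264pay name=pay0 pt=96 ! fakesink'
	gst = subprocess.Popen(['gst-launch-1.0'] + pipeline.split(),
	                       stdin=raspivid.stdout)
	gst.wait()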

A few different ways, but as I don't have any Compute Modules, I think that is all I can offer.

The picam seems to stream better than the usbcam when viewed with the ONVIF Device Manager, so I've also been trying to modify the picam stream.
When I start node rpos.js, the StreamServer is initialized and the RTSP server is running, but when I try to connect with the ONVIF Device Manager, ffmpeg indicates an error and closes.
Not sure if this matters, but the video is usually 1280x720, and if I just output one video stream with width=640, when I view it using the ONVIF Device Manager it just stretches the video to fit the screen instead of showing just half a screen.
Here's what I tried, any thoughts?

picam

	launch_str = 	'( videomixer name=m sink_0::xpos=0 sink_1::xpos=640 ! h264parse ! rtph264pay name=pay0 pt=96' + \
                     ' rpicamsrc camera-number = 0 preview=false bitrate='+str(self.bitrate)+' keyframe-interval='+str(self.h264_i_frame_period) + \
                     ' ! video/x-h264, framerate='+str(self.fps)+'/1, width=640, height=720 ! m.' + \
                     ' rpicamsrc camera-number=1 preview=false bitrate='+str(self.bitrate)+' keyframe-interval='+str(self.h264_i_frame_period) + \
                     ' ! video/x-h264, framerate='+str(self.fps)+'/1, width=640, height=720 ! m.)'

usbcam

		launch_str = '( videomixer name=m sink_0::xpos 0 sink1::xpos 640 ! video/x-h264,profile=baseline ! h264parse ! rtph264pay name=pay0 pt=96  v4l2src device=/dev/video0 ! image/jpeg,width=640, height='+str(self.height)+',framerate='+str(self.fps)+'/1 ! jpegdec ! clockoverlay ! omxh264enc target-bitrate='+str(self.bitrate)+' control-rate=variable ! m.' + \
                        'v4l2src device=/dev/video1 ! image/jpeg,width=640 ,height='+str(self.height)+',framerate='+str(self.fps)+'/1 ! jpegdec ! clockoverlay ! omxh264enc target-bitrate='+str(self.bitrate)+' control-rate=variable ! m.)'

I noticed raspivid -3d sbs will generate side-by-side video and found examples using raspivid to output to stdout, but couldn't get gst-launch to read it. Tried gst-launch-1.0 fdsrc.
Also tried glstereomix but got a "no element glupload" warning, so I couldn't figure out how to get that working.

Was able to put video side-by-side using v4l2src and view it with the ONVIF Device Manager. Now I'll see if I can get it to work with rpicamsrc since the latency is better. videomixer is deprecated in favor of compositor, so I switched to compositor. https://gstreamer.freedesktop.org/documentation/videomixer/index.html?gi-language=c

	else: # USB Camera
		# Ignore most of the parameters
		log.info("USB camera ignored most of the parameters")		
		launch_str = '( compositor name=m sink_0::xpos=0 sink_1::xpos=640 ! omxh264enc target-bitrate=10000000 control-rate=variable ! video/x-h264,profile=baseline ! h264parse ! rtph264pay name=pay0 pt=96 '
		launch_str = launch_str +' v4l2src device=/dev/video0 ! image/jpeg,width=640,height=720,framerate=30/1 ! jpegdec ! m. '
		launch_str = launch_str +' v4l2src device=/dev/video1 ! image/jpeg,width=640,height=720,framerate=30/1 ! jpegdec ! m.)'

Just got the rpicamsrc working. Had to put it side-by-side using x-raw, then convert to h.264.
I stripped out the camera settings to make it easier to view.

		launch_str = '( compositor name=m sink_0::xpos=0 sink_1::xpos=640 ! omxh264enc target-bitrate=10000000 control-rate=variable ! video/x-h264,profile=baseline ! h264parse ! rtph264pay name=pay0 pt=96 '
		launch_str = launch_str + ' rpicamsrc camera-number=0 ! video/x-raw, framerate=30/1, width=640, height=720 ! m. '
		launch_str = launch_str + ' rpicamsrc camera-number=1 ! video/x-raw, framerate=30/1, width=640, height=720 ! m. )'

That is excellent news.
Well done.