LEGO VISION COMMAND USB Camera with EV3 and ev3dev
Nikolay-Zagrebin opened this issue · 11 comments
ev3dev version: 4.14.117-ev3dev-2.3.5-ev3
ev3dev-lang-python version (output of dpkg-query -l {python3,micropython}-ev3dev*):
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-===================-==============-==============-===========================================
ii python3-ev3dev 1.2.0 all Python language bindings for ev3dev
dpkg-query: no packages found matching micropython-ev3dev
Hi there,
Thanks to the ev3dev development team, especially David Lechner.
Inspired by his experiment connecting the LEGO Movie Maker USB camera to the EV3 under ev3dev, I repeated it with the LEGO VISION COMMAND USB camera, and everything works!
Next, I have the idea to experiment with machine vision and neural networks. I decided to use the pygame library, installed it from the repository, and checked that version 1.9.1 was among the installed packages:
python-pygame/oldstable,now 1.9.1release+dfsg-10+b2 armel [installed]
It is with this version that pygame supports working with the camera, so I am trying to reproduce the camera example described here.
Connection and initialization go perfectly:
brickrun -r -- python
import pygame
import pygame.camera
from pygame.locals import *
pygame.init()
pygame.camera.init()
cam = pygame.camera.Camera("/dev/video0",(640,480))
cam.start()
When I call cam.start(), I get the error:
Traceback (most recent call last):
File "", line 1, in
SystemError: ioctl(VIDIOC_S_FMT) failure: no supported formats
Do I understand correctly that the camera may simply not be supported by pygame?
Are there any recommendations for solving this problem?
You might try a command line tool like fswebcam first to figure out what resolutions are supported. Since the USB host port on the EV3 is USB 1.1 only, only a few modes work even if the camera can do other modes when connected with USB 2.0.
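For example, something along these lines should show whether a given mode works (the resolution here is only a guess):
fswebcam -d /dev/video0 -r 352x288 test.jpg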
Thank you very much. I installed the v4l2-ctl utility to study the formats supported by the camera in more detail.
robot@ev3dev:~$ v4l2-ctl -d /dev/video0 --list-formats
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'GRBG'
Name : 8-bit Bayer GRGR/BGBG
From the output of VIDIOC_ENUM_FMT it became clear that the camera supports only one pixel format, 'GRBG', and pygame's default ioctl(VIDIOC_S_FMT) request does not ask for it; it seems that this is the problem. It remains to understand how to change the pygame call.
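For reference, the same check can be done from Python with the v4l2 bindings used in the code further down; a minimal sketch, assuming the python v4l2 module is installed:
from fcntl import ioctl
import v4l2
# Enumerate every pixel format the driver offers (VIDIOC_ENUM_FMT)
with open('/dev/video0', 'rb+', buffering=0) as fd:
    index = 0
    while True:
        fmtdesc = v4l2.v4l2_fmtdesc()
        fmtdesc.index = index
        fmtdesc.type = v4l2.V4L2_BUF_TYPE_VIDEO_CAPTURE
        try:
            ioctl(fd, v4l2.VIDIOC_ENUM_FMT, fmtdesc)
        except OSError:
            break  # no more formats
        print(index, fmtdesc.description.decode())
        index += 1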
Do you have any ideas?
No, I have never used pygame.
OK, thanks.
I'm still at a dead end with pygame.
Maybe I'll try cv2. How do I install it correctly?
sudo apt-get install python-opencv
or
pip3 install opencv-python
Use the Debian package.
cv2 also does not work; error:
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
In general, it became clear that modern image processing libraries do not support our old lady (the LEGO VISION COMMAND) ))).
I studied our camera in more detail with v4l2-ctl --all; one of the important parameters:
...
Pixel Format : 'GRBG'
...
I studied how to grab a frame from the video buffer, and ended up with the following code:
from fcntl import ioctl
import v4l2
import mmap
NUM_BUFFERS = 1
## 1. Initializing the device
fd = open('/dev/video0', 'rb+', buffering=0)
fmt = v4l2.v4l2_format()
fmt.type = v4l2.V4L2_BUF_TYPE_VIDEO_CAPTURE
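# Get the driver's current format and set it back unchanged, so the camera stays in its native GRBG mode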
ioctl(fd, v4l2.VIDIOC_G_FMT, fmt)
ioctl(fd, v4l2.VIDIOC_S_FMT, fmt)
## 2. Requesting a buffer
req = v4l2.v4l2_requestbuffers()
req.count = NUM_BUFFERS
req.type = v4l2.V4L2_BUF_TYPE_VIDEO_CAPTURE
req.memory = v4l2.V4L2_MEMORY_MMAP
ioctl(fd, v4l2.VIDIOC_REQBUFS, req)
## 3. Do the memory mapping
buffers = []
for x in range(req.count):
    buf = v4l2.v4l2_buffer()
    buf.type = v4l2.V4L2_BUF_TYPE_VIDEO_CAPTURE
    buf.memory = v4l2.V4L2_MEMORY_MMAP
    buf.index = x
    ioctl(fd, v4l2.VIDIOC_QUERYBUF, buf)
    buf.buffer = mmap.mmap(fd.fileno(), buf.length, mmap.MAP_SHARED, mmap.PROT_READ | mmap.PROT_WRITE, offset=buf.m.offset)
    buffers.append(buf)
for buf in buffers:
    ioctl(fd, v4l2.VIDIOC_QBUF, buf)
## 4. Tell the camera to start streaming
buf_type = v4l2.v4l2_buf_type(v4l2.V4L2_BUF_TYPE_VIDEO_CAPTURE)
ioctl(fd, v4l2.VIDIOC_STREAMON, buf_type)
## 5. Capture image
buf = buffers[0]
ioctl(fd, v4l2.VIDIOC_DQBUF, buf)
video_buffer = buffers[buf.index].buffer
data = video_buffer.read(buf.bytesused)
video_buffer.seek(0)
raw_data = open("frame.bin", "wb")
raw_data.write(data)
raw_data.close()
ioctl(fd, v4l2.VIDIOC_QBUF, buf)
## 6. Tell the camera to stop streaming
ioctl(fd, v4l2.VIDIOC_STREAMOFF, buf_type)
fd.close()
For now I am saving the raw data to a file, but I need to load the image into OpenCV. Naturally, it cannot simply be loaded, because OpenCV does not support this pixel format.
Can you advise me how to convert raw data in the 'GRBG' format to, for example, JPG? Or any other format.
Guys, I found a solution, everything turned out to be simple!!!
Importing the numpy and cv2 libraries:
import numpy as np
import cv2
Loading the frame data from the file:
bayer8_image = np.fromfile('frame1.bin', dtype=np.uint8).reshape((292,356))
Converting the data to an image:
image = cv2.cvtColor(bayer8_image, cv2.COLOR_BayerGR2RGB)
Saving to a JPG file:
cv2.imwrite('test1.jpg', image)
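As a variant, the file round trip can be skipped and the buffer converted in memory right after capture; a sketch, assuming the same 356x292 frame size and the data variable from the capture code above:
import numpy as np
import cv2
# data holds the raw GRBG bytes read from the mmap'ed buffer above
bayer8_image = np.frombuffer(data, dtype=np.uint8).reshape((292, 356))
image = cv2.cvtColor(bayer8_image, cv2.COLOR_BayerGR2RGB)
cv2.imwrite('test1.jpg', image)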
That's it!!!
It remains to wrap everything in a presentable interface and go ahead to conquer the AI )))
I repeated the work of the guys from leJOS, outputting the image from the camera to the EV3 display; details here:
LEGO-VISION-COMMAND-USB-Camera-with-EV3-and-ev3dev
I plan to output the image to a web server for debugging; it is still unclear whether to create the server on the EV3 or send the output to an external one. Can you advise which is better, maybe someone has experience?
Example server: https://github.com/G33kDude/ev3dev-web-remote
OK thanks, I'll study it.
For now, using the Flask library, it turned out to be very simple.
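A minimal sketch of what that can look like, assuming the frame is saved as test1.jpg by the code above (route and port are arbitrary):
from flask import Flask, send_file

app = Flask(__name__)

@app.route('/frame')
def frame():
    # Serve the most recently captured frame written by the capture code
    return send_file('test1.jpg', mimetype='image/jpeg')

app.run(host='0.0.0.0', port=5000)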
The problem is solved, you can close the issue. The implementation is in my repository.