JetCam is an easy-to-use Python camera interface for NVIDIA Jetson.
- Works with various USB and CSI cameras using Jetson's Accelerated GStreamer Plugins
- Easily read images as `numpy` arrays with `image = camera.read()`
- Set the camera to `running = True` to attach callbacks to new frames
JetCam makes it easy to prototype AI projects in Python, especially within the Jupyter Lab programming environment installed in JetCard.
If you find an issue, please let us know!
```bash
git clone https://github.com/NVIDIA-AI-IOT/jetcam
cd jetcam
sudo python3 setup.py install
```
JetCam is tested against a system configured with the JetCard setup. Different system configurations may require additional steps.
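To confirm the install worked, one option (a minimal sanity check, not part of the official instructions) is to import the package:

```python
# If this import succeeds, the jetcam package is installed and on the Python path.
import jetcam
print("jetcam imported OK")
```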
Below we show some usage examples. You can find more in the notebooks.
Call `CSICamera` to use a compatible CSI camera. `capture_width`, `capture_height`, and `capture_fps` control the capture shape and the rate at which images are acquired. `width` and `height` control the final output shape of the image returned by the `read` function.
```python
from jetcam.csi_camera import CSICamera

camera = CSICamera(width=224, height=224, capture_width=1080, capture_height=720, capture_fps=30)
```
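As a quick sanity check (a sketch, assuming the camera above initialized successfully), the returned frame should have the output shape set by `width` and `height`, not the capture shape:

```python
# The frame is resized to the requested output shape, not the capture shape.
image = camera.read()
print(image.shape, image.dtype)  # expected: (224, 224, 3) uint8
```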
Call `USBCamera` to use a compatible USB camera. The same parameters as `CSICamera` apply, along with a `capture_device` parameter that indicates the device index. You can check the available device indices by calling `ls /dev/video*`.
```python
from jetcam.usb_camera import USBCamera

camera = USBCamera(capture_device=1)
```
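Because the same parameters as `CSICamera` apply, you can also request specific output and capture shapes. The values below are illustrative assumptions, and the device index depends on how your camera enumerates:

```python
from jetcam.usb_camera import USBCamera

# Illustrative values only: pick a capture resolution your webcam actually supports,
# and set capture_device to the index reported by `ls /dev/video*`.
camera = USBCamera(width=224, height=224, capture_width=640, capture_height=480, capture_device=0)
```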
Call `RTSPCamera` to use an RTSP video stream. The same parameters as `CSICamera` apply, along with a `capture_source` parameter that indicates the full RTSP stream address.
```python
from jetcam.rtsp_camera import RTSPCamera

camera = RTSPCamera(width=224, height=224, capture_width=640, capture_height=480, capture_source='rtsp://10.42.0.161:5540/ch0')
```
Call `read()` to read the latest image as a `numpy.ndarray` of data type `np.uint8` and shape `(224, 224, 3)`. The color format is `BGR8`.
```python
image = camera.read()
```
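Because the frames are `BGR8`, you may want to reorder the channels before passing them to libraries that expect RGB (such as matplotlib). A minimal sketch, assuming `image` was read as above:

```python
# Reverse the channel axis to convert BGR -> RGB for RGB-based display libraries.
rgb_image = image[:, :, ::-1]
```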
The `read` function also updates the camera's internal `value` attribute.
```python
camera.read()
image = camera.value
```
You can also set the camera to `running = True`, which will spawn a thread that acquires images from the camera in the background. These images update the camera's `value` attribute automatically. You can attach a callback to the value using the traitlets library; the callback will be called with the new camera value as well as the old one.
```python
camera.running = True

def callback(change):
    new_image = change['new']
    # do some processing...

camera.observe(callback, names='value')
```
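To stop acquiring frames, you can detach the callback and turn the camera off again. This is a sketch using the standard traitlets `unobserve` call, assuming the `camera` and `callback` defined above and that setting `running = False` stops the acquisition thread:

```python
# Detach the callback, then stop the background acquisition thread.
camera.unobserve(callback, names='value')
camera.running = False
```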
These cameras work with the `CSICamera` class. Try them out by following the example notebook.
| Model | Infrared | FOV (degrees) | Resolution | Cost |
|-------|----------|---------------|------------|------|
| Raspberry Pi Camera V2 |  | 62.2 | 3280x2464 | $25 |
| Raspberry Pi Camera V2 (NOIR) | x | 62.2 | 3280x2464 | $31 |
| Arducam IMX219 CS lens mount |  |  | 3280x2464 | $65 |
| Arducam IMX219 M12 lens mount |  |  | 3280x2464 | $60 |
| LI-IMX219-MIPI-FF-NANO |  |  | 3280x2464 | $29 |
| WaveShare IMX219-77 |  | 77 | 3280x2464 | $19 |
| WaveShare IMX219-77IR | x | 77 | 3280x2464 | $21 |
| WaveShare IMX219-120 |  | 120 | 3280x2464 | $20 |
| WaveShare IMX219-160 |  | 160 | 3280x2464 | $23 |
| WaveShare IMX219-160IR | x | 160 | 3280x2464 | $25 |
| WaveShare IMX219-200 |  | 200 | 3280x2464 | $27 |
These cameras work with the `USBCamera` class. Try them out by following the example notebook.
| Model | Infrared | FOV (degrees) | Resolution | Cost |
|-------|----------|---------------|------------|------|
| Logitech C270 |  | 60 | 1280x720 | $18 |
Some example streams: grigory-lobkov/rtsp-camera-view#3
Android RTSP server app: https://play.google.com/store/apps/details?id=veg.mediacapture.sdk.test.server&hl=en