This package provides a plugin for using a Kinect v2 device in Flitter. It makes use of the `freenect2` Python package, which requires the `libfreenect2` library to be installed. This is left as an exercise for the reader.

The plugin will scan for the presence of a Kinect device before trying to connect to it, and tries to deal gracefully with the device being unplugged while in use. Only a single attached device is supported.
The additional nodes provided by this plugin are described below.

The `!kinect` window node provides access to the raw frames from the Kinect
as an image. In addition to the standard attributes (`size`, etc.), it
supports the following:
`output=` [ `:color` | `:depth` | `:registered` | `:combined` ]
: Whether to output the raw frame from the color camera, the raw frame from
the depth camera, the registered color image or a combined image. The default
is `:combined`.
`flip_x=` [ `true` | `false` ]
: Whether to flip the image horizontally. Default is `false`.
`flip_y=` [ `true` | `false` ]
: Whether to flip the image vertically. Default is `false`.
`near=` *DISTANCE*
: The near time-of-flight clip sphere of the depth camera, in metres. Depths
smaller than this will be considered to be invalid. Default is `0.5`.
`far=` *DISTANCE*
: The far time-of-flight clip sphere of the depth camera, in metres. Depths
larger than this will be considered to be invalid. Default is `4.5`.
`near_value=` *VALUE*
: The output channel value to use for distances at `near`. Default is `1`.
`far_value=` *VALUE*
: The output channel value to use for distances at `far`. Default is `0`.
`invalid_value=` *VALUE*
: The value to use for the depth channel if the distance is nearer than
`near` or further than `far`. Default is `0`.
In `:depth` output mode, the result will be a 512x424 image with each of the
RGB channels set to the distance through that pixel and the A channel set to
`1`. Distances in the range `near` to `far` will be mapped linearly to grey
values between `near_value` and `far_value`, with the value being
`invalid_value` for distances outside of that range.
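In other words, a valid distance *d* maps to the grey value
`near_value + (far_value - near_value) * (d - near) / (far - near)`; with the
default attribute values, a point 0.5m from the camera maps to `1` (white),
fading linearly to `0` (black) at 4.5m.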
In `:color` output mode, the result image will be the 1920x1080 color frame
as received from the Kinect visible light camera.
For `:registered` or `:combined` output, the color image will be cropped and
aligned to the undistorted depth camera's view. With `:combined`, the A
channel will contain the depth value, as described above. The RGB channels
will not be premultiplied by this value (it's not a real alpha). With
`:registered`, the A channel will be `1`.
The `!kinect` window node can be used multiple times in a view without
problem. Each will show data from the same device.
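For example, a minimal sketch showing the depth map in a window (the specific
attribute values here are purely illustrative):

```
!window size=512;424
    !kinect output=:depth flip_x=true near=0.5 far=3
```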
The second node provided by the plugin gives access to the output of the depth camera as a live 3D surface. The surface is constructed from the camera's point of view, with the camera at the origin and the Z axis pointing towards the camera, so the entire surface exists on the negative-Z side of the origin, with the normals (and windings) on the camera side of the surface. The model units are in metres. Invalid depth values will translate to holes in the surface.
The node supports the following attributes:
`average=` *NFRAMES*
: The depth camera output is pretty noisy. Set this to a number (greater than
`1`) to average together the last *NFRAMES* frames. A value of `3` is pretty
decent, but anything higher will cause visible spacetime smearing of any
moving objects. The default is `1`, i.e., do no averaging.
`tear=` *DISTANCE*
: Set to a difference in depth (in metres) at which parts of the surface will
be torn apart instead of joined. This is useful to differentiate near objects
from far ones. The default is `0`, which means to not tear the surface.
`near=` *DISTANCE*
: A near Z-axis clip plane, measured in (positive) metres from the camera.
Points closer than this will be considered invalid. Default is `0.5`.
`far=` *DISTANCE*
: A far Z-axis clip plane, measured in (positive) metres from the camera.
Points further than this will be considered invalid. Default is `4.5`.
The surface has UV coordinates matching the `:registered` color output of the
camera (as described above), and therefore the color camera output can be
texture-mapped onto the surface, as in the sketch below.
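As a rough sketch of how this might be wired up, assuming the surface node is
also written as `!kinect` when placed inside a `!canvas3d`, and assuming the
usual `id=`/`texture_id=` mechanism for mapping one window node's output onto
a material (all specific values are illustrative):

```
!window size=1920;1080
    !kinect id=:camera output=:registered
    !canvas3d viewpoint=0;0;0 focus=0;0;-2 up=0;1;0
        !light color=1 direction=0;0;-1
        !material color=1 texture_id=:camera
            !kinect average=3 tear=0.2 near=0.5 far=3
```

Here the registered color image is given an `id` so that the material can use
it as a texture, and the `!canvas3d` camera is placed at the origin looking
down the negative Z axis to match the surface's coordinate system.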