Analyze Keijiro's implementation for Azure
Closed this issue · 2 comments
chikashimiyama commented
- He shows color and depth in 2D, which is a big help for the analysis.
- The clear goal of his data transfer from the Azure Kinect is to end up in RenderTextures.
- He actually uses the C++ native plugins k4a and depthengine_2_0, so this project is Windows-only.
- The core data-transfer functionality is in the CaptureThread function; as the name suggests, he uses a second thread to grab the data and put it in a queue (a minimal sketch of this pattern follows the blit snippet below).
- The LockLastFrame / ReleaseLastFrame functions pass the grabbed data on to the next class.
- PointCloudBaker copies that into normal textures (_temporaries) and probably copies the content into two RenderTextures:
```csharp
var prevRT = RenderTexture.active;
GraphicsExtensions.SetRenderTarget(_colorTexture, _positionTexture);
Graphics.Blit(null, _material, 0);
RenderTexture.active = prevRT;
```
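Judging from that snippet, GraphicsExtensions.SetRenderTarget presumably binds both _colorTexture and _positionTexture as multiple render targets, so a single Graphics.Blit with pass 0 of _material fills the color and position maps in one draw before the previously active render target is restored.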
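For our own notes, here is a minimal sketch of the second-thread-plus-queue pattern described above. This is not Keijiro's actual code: the byte[] frame payload, the grabFrame delegate and the CaptureQueue class are stand-ins for whatever the k4a native plugin hands back; only the CaptureThread / LockLastFrame / ReleaseLastFrame naming is taken from his implementation.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Sketch of the capture pattern: a background thread grabs frames and
// queues them; the consumer locks the newest frame and releases it later.
public sealed class CaptureQueue : IDisposable
{
    private readonly Func<byte[]> _grabFrame;  // placeholder for the native grab call
    private readonly ConcurrentQueue<byte[]> _queue = new ConcurrentQueue<byte[]>();
    private readonly Thread _thread;
    private volatile bool _running = true;
    private byte[] _lockedFrame;               // frame currently handed to the consumer

    public CaptureQueue(Func<byte[]> grabFrame)
    {
        _grabFrame = grabFrame;
        _thread = new Thread(CaptureThread) { IsBackground = true };
        _thread.Start();
    }

    // Runs on the second thread: keep grabbing frames and queueing them.
    private void CaptureThread()
    {
        while (_running)
        {
            var frame = _grabFrame();
            if (frame != null) _queue.Enqueue(frame);
        }
    }

    // Consumer side: drain the queue down to the newest frame and keep it
    // alive until ReleaseLastFrame is called.
    public byte[] LockLastFrame()
    {
        byte[] frame = null;
        while (_queue.TryDequeue(out var next)) frame = next;  // drop stale frames
        if (frame != null) _lockedFrame = frame;
        return _lockedFrame;
    }

    public void ReleaseLastFrame() => _lockedFrame = null;

    public void Dispose()
    {
        _running = false;
        _thread.Join();
    }
}
```

A ConcurrentQueue plus a drain-to-newest LockLastFrame keeps the Unity main thread from ever blocking on the camera.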
chikashimiyama commented
Overall, Keijiro's implementation is full of hacks and workarounds. That doesn't mean he is doing anything wrong; the Azure Kinect API simply isn't mature enough and doesn't let developers do easy things easily.
I assume the API may still change significantly towards the release version, so I don't think it's worth implementing this in a proper way; just grab his code and put it into SoundVision with enough isolation that we can discard or rebuild the entire module later if necessary (see the interface sketch below).
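Concretely, the isolation could be a single seam between SoundVision and the grabbed code. The interface below is a hypothetical sketch (IDepthCameraSource and its members are my names, not anything in Keijiro's project): everything outside the module depends only on it, so the backend behind it can be thrown away and rebuilt when the Azure Kinect API changes.

```csharp
using UnityEngine;

// Hypothetical seam for SoundVision: the rest of the app talks only to this
// interface, never to the Keijiro-derived capture code behind it.
public interface IDepthCameraSource
{
    bool IsRunning { get; }
    RenderTexture ColorTexture { get; }     // what the rest of SoundVision consumes
    RenderTexture PositionTexture { get; }
    void StartCapture();
    void StopCapture();
}
```

Swapping or rebuilding the capture backend then becomes a change to a single implementing class.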