A tool that transforms a 2D image or video into 3D meshes with depth.
Targets all devices, including mobile.
Encoder
Barebone MobileNetV2
- To run smoothly on mobile devices, I reduced the learnable parameters to the bare minimum, roughly half the parameters of the official MobileNetV2.
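A minimal sketch of what a slimmed-down MobileNetV2 encoder can look like in Keras; the width multiplier (alpha), input size, and the layers tapped for skip connections are illustrative assumptions, not the exact configuration of this project.

```python
import tensorflow as tf

# Sketch: a slimmed-down MobileNetV2 encoder. A width multiplier below 1.0
# shrinks the channel counts and hence the learnable parameters; the exact
# alpha used in this project is an assumption here.
def build_encoder(input_shape=(256, 256, 3), alpha=0.5):
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape,
        alpha=alpha,            # width multiplier -> fewer parameters
        include_top=False,      # keep only the convolutional feature extractor
        weights=None)           # train from scratch on depth data

    # Feature maps at several resolutions, reused later as U-Net skip connections.
    skip_names = ["block_1_expand_relu",   # 1/2 resolution
                  "block_3_expand_relu",   # 1/4
                  "block_6_expand_relu",   # 1/8
                  "block_13_expand_relu"]  # 1/16
    skips = [base.get_layer(n).output for n in skip_names]
    return tf.keras.Model(base.input, [base.output] + skips, name="encoder")
```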
Decoder
Reverse of the encoder + U-Net structure
- The U-Net skip connections are the key to making the autoencoder actually learn something; without them, nothing works.
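A hedged sketch of the decoder idea: upsample stage by stage back to input resolution (the reverse of the encoder) and concatenate the encoder's intermediate feature maps at each stage (the U-Net skip connections). Filter counts and layer choices are assumptions for illustration only.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Sketch: decoder that mirrors the encoder and adds U-Net skip connections.
# `encoder` is the model from build_encoder(); channel counts are illustrative.
def build_depth_model(encoder):
    bottleneck, *skips = encoder.outputs        # deepest features + skip taps
    x = bottleneck
    for skip, filters in zip(reversed(skips), [96, 48, 24, 16]):
        x = layers.UpSampling2D()(x)            # undo one encoder downsampling step
        x = layers.Concatenate()([x, skip])     # U-Net skip connection
        x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)                # back to input resolution
    depth = layers.Conv2D(1, 3, padding="same", activation="sigmoid", name="depth")(x)
    return tf.keras.Model(encoder.input, depth, name="depth_estimator")
```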
Architecture
Data sources
1. DIML: https://dimlrgbd.github.io/
2. TAU: http://www.cs.toronto.edu/~harel/TAUAgent/home.html
3. Additional street-scene and human-pose depth images
- Use tensorflow-onnx (tf2onnx) to export the model to an ONNX file (see the export sketch after this list).
- Use Unity Barracuda to read the ONNX file and run inference asynchronously in a coroutine (running it on a separate thread is not available).
- Read the generated depth texture and store each pixel's position and depth value in a ComputeBuffer.
- Pass the ComputeBuffer down to a vertex/geometry/fragment shader and draw the meshes there; otherwise the output looks like a plain point cloud.
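For the ONNX export step in the first bullet above, a minimal sketch using the tf2onnx Python API; the file names, input size, and opset version are placeholders, not the exact ones used in this project.

```python
import tensorflow as tf
import tf2onnx

# Sketch: convert the trained Keras depth model to ONNX for Unity Barracuda.
# Equivalent CLI: python -m tf2onnx.convert --saved-model depth_estimator \
#                 --output depth_estimator.onnx --opset 13
model = tf.keras.models.load_model("depth_estimator")   # SavedModel directory (placeholder path)
spec = (tf.TensorSpec((1, 256, 256, 3), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec,
                           opset=13, output_path="depth_estimator.onnx")
```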