Samples for 2D and 3D Direct Linear Transformation (DLT)

I couldn't easily find a library for DLT (Abdel-Aziz & Karara 1971, Proc. Symposium on Close-Range Photogrammetry, Urbana, Illinois, pp. 1-18) for motion capture by googling, so I decided to write my own code. I took the required math predominantly from http://kwon3d.com/theory/dlt/dlt.html and used "JAMA : A Java Matrix Package" for matrix algebra (http://math.nist.gov/javanumerics/jama/). I didn't quite get everything I needed to understand from kwon3d.com and had to do some figuring out myself. Consequently, only the basic DLT is implemented (i.e. coefficients 1-11 for the 3D case) and lens distortions are not accounted for.

The folder octaveTest contains an Octave script testDaesik, which combines Tsai's camera calibration (implemented by Daesik, http://www.daesik80.com/matlabfns/matlabfns.htm) with the DLT, and the calibration seems to work. An unintended consequence of the lens calibration is that it makes very obvious that extrapolating outside of the calibrated volume would lead to massive inaccuracies. I'm satisfied with Tsai's camera calibration and intend to implement it in Java. More sophisticated camera calibration is available e.g. in the OpenCV library (opencv.org), but the implementation is in C++ instead of Java.

Added a Gradle build script. Testing on Linux (in the folder of this file [update 0.0.2 to correspond to build.gradle version...]):

    export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda-12.4/lib64:$LD_LIBRARY_PATH
    gradle jar && java -jar build/libs/Direct-Linear-Transformation-0.0.2.jar

ONGOING TESTING:
JOCL to leverage the GPU in solving 3D coordinates based on the image coordinates of multiple camera views. This reduces to requiring a 3x3 matrix inverse, which can be solved with the adjugate and determinant (see the reconstruction sketch below).
http://www.jocl.org/
https://en.wikipedia.org/wiki/Adjugate_matrix

JOCL requires OpenCL to be on the library path (e.g. on my Linux machine):

    export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda-12.4/lib64:$LD_LIBRARY_PATH

NEXT TODO:
Test Newton's method to get undistorted points from distorted coordinates (points digitized on the original image). This would be needed to avoid having to use the interpolated undistorted image for analysis (see the sketch below).

In short, 3D motion capture with DLT comprises two steps:

1) Plane (2D) / volume (3D) calibration using a known calibration object, digitizing the control points from each camera. The calibration object and the digitized points are then used to obtain the DLT coefficients for each camera.

2) Digitizing unknown point(s) within the calibrated plane (or volume in 3D analysis; points from a minimum of two cameras) and applying the DLT to obtain the global coordinates.

Typically camera calibration is also included to improve accuracy (e.g. Tsai's camera calibration). The DLT is then calculated on the calibrated, undistorted coordinates.
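
To make step 1 concrete, here is a minimal sketch (not the exact code in this repo; class and method names are illustrative) of solving the 11 basic 3D DLT coefficients with JAMA. Each control point contributes two rows to an overdetermined linear system, which JAMA's solve() handles in the least-squares sense:

```java
import Jama.Matrix;

/** Illustrative sketch: solve the 11 basic 3D DLT coefficients from >= 6 control points. */
public class DltCalibration {
    /**
     * @param world n x 3 control point coordinates (X, Y, Z)
     * @param image n x 2 digitized image coordinates (u, v)
     * @return the 11 DLT coefficients L1..L11
     */
    public static double[] solveCoefficients(double[][] world, double[][] image) {
        int n = world.length; // at least 6 points are needed for 11 unknowns
        Matrix a = new Matrix(2 * n, 11);
        Matrix b = new Matrix(2 * n, 1);
        for (int i = 0; i < n; i++) {
            double x = world[i][0], y = world[i][1], z = world[i][2];
            double u = image[i][0], v = image[i][1];
            // u-row: L1 X + L2 Y + L3 Z + L4 - u (L9 X + L10 Y + L11 Z) = u
            double[] ru = {x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z};
            // v-row: L5 X + L6 Y + L7 Z + L8 - v (L9 X + L10 Y + L11 Z) = v
            double[] rv = {0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z};
            for (int j = 0; j < 11; j++) {
                a.set(2 * i, j, ru[j]);
                a.set(2 * i + 1, j, rv[j]);
            }
            b.set(2 * i, 0, u);
            b.set(2 * i + 1, 0, v);
        }
        // JAMA returns the least-squares solution for a rectangular system
        return a.solve(b).getColumnPackedCopy();
    }
}
```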
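
For step 2, and for the adjugate-based 3x3 inverse mentioned under ONGOING TESTING, below is a sketch in plain Java of the computation a per-point OpenCL kernel would perform. The per-camera DLT rows give an overdetermined system in (X, Y, Z); its 3x3 normal matrix is inverted via the adjugate and determinant. All names are illustrative:

```java
/** Illustrative sketch: reconstruct (X, Y, Z) from >= 2 camera views by forming the
 *  3x3 normal equations and inverting them with the adjugate and determinant. */
public class DltReconstruction {
    /**
     * @param dlt per camera: the 11 DLT coefficients L1..L11
     * @param uv  per camera: the digitized image coordinates (u, v)
     * @return the reconstructed global coordinates (X, Y, Z)
     */
    public static double[] reconstruct(double[][] dlt, double[][] uv) {
        double[][] ata = new double[3][3]; // A^T A (3x3 normal matrix)
        double[] atb = new double[3];      // A^T b
        for (int c = 0; c < dlt.length; c++) {
            double[] l = dlt[c];
            double u = uv[c][0], v = uv[c][1];
            // Each camera contributes two rows, e.g.
            // (L1 - u L9) X + (L2 - u L10) Y + (L3 - u L11) Z = u - L4
            double[][] rows = {
                {l[0] - u * l[8], l[1] - u * l[9], l[2] - u * l[10]},
                {l[4] - v * l[8], l[5] - v * l[9], l[6] - v * l[10]}
            };
            double[] rhs = {u - l[3], v - l[7]};
            for (int r = 0; r < 2; r++) {
                for (int i = 0; i < 3; i++) {
                    atb[i] += rows[r][i] * rhs[r];
                    for (int j = 0; j < 3; j++) ata[i][j] += rows[r][i] * rows[r][j];
                }
            }
        }
        // adj(A)[i][j] is the cofactor of element (j, i); cyclic indices handle the signs
        double[][] adj = new double[3][3];
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                int r1 = (j + 1) % 3, r2 = (j + 2) % 3, c1 = (i + 1) % 3, c2 = (i + 2) % 3;
                adj[i][j] = ata[r1][c1] * ata[r2][c2] - ata[r1][c2] * ata[r2][c1];
            }
        }
        // Determinant by cofactor expansion along the first row
        double det = ata[0][0] * adj[0][0] + ata[0][1] * adj[1][0] + ata[0][2] * adj[2][0];
        double[] xyz = new double[3]; // x = adj(A^T A) / det * A^T b
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                xyz[i] += adj[i][j] / det * atb[j];
        return xyz;
    }
}
```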
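
For the Newton's method TODO, below is a sketch assuming a one-parameter radial distortion model r_d = r_u (1 + k1 r_u^2) about the principal point; the actual distortion model (e.g. Tsai's) and all names/parameters here are illustrative assumptions, not this repo's code:

```java
/** Illustrative sketch: invert a one-parameter radial distortion model with
 *  Newton's method (assumed model: r_d = r_u * (1 + k1 * r_u^2)). */
public class Undistort {
    public static double[] undistort(double uDist, double vDist,
                                     double cx, double cy, double k1) {
        double xd = uDist - cx, yd = vDist - cy;
        double rd = Math.sqrt(xd * xd + yd * yd); // distorted radius
        if (rd == 0) return new double[]{uDist, vDist};
        // Solve f(ru) = ru + k1*ru^3 - rd = 0 for the undistorted radius ru
        double ru = rd; // the distorted radius is a reasonable initial guess
        for (int iter = 0; iter < 20; iter++) {
            double f = ru + k1 * ru * ru * ru - rd;
            double fPrime = 1 + 3 * k1 * ru * ru;
            double step = f / fPrime;
            ru -= step;
            if (Math.abs(step) < 1e-12) break;
        }
        double scale = ru / rd; // rescale the radial component only
        return new double[]{cx + xd * scale, cy + yd * scale};
    }
}
```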