zhang
-----

Camera calibration using Zhang's algorithm and CALTag

Primary language: MATLAB

Zhang instructions for use:
---------------------------


0) Requirements:

- MATLAB (tested with R2010a) with the Optimization Toolbox
- CALTag (https://github.com/brada/caltag)



1) Capture images:

You need about 15-20 images for a reasonable calibration. See
http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html
for an example of the type of image set you need to capture. Bear the
following in mind in order to maximise accuracy:
- use a tripod, or a fast shutter speed
- the calibration grid must be flat
- either keep the camera stationary and move the calibration grid, or keep
  the grid stationary and move the camera. Don't bump them! Use a remote
  trigger on the camera if necessary
- explore the range of relative motion between the camera and the grid as
  much as possible - tilt the grid and move it as far left/right, up/down,
  and near/far as you can (while still being able to accurately detect the
  grid markers). CALTag should be able to handle tilt angles of up to about
  60 degrees



2) Sample usage:


See the individual m-files for descriptions of the parameters.

Calibration = zhang_init( 'IMG_*.jpg', 'mycalib.mat' );
Calibration = zhang_detectcorners( 'mycalib.mat', './data/CALTagPattern_14x9.mat' );
Calibration = zhang_calibrate_init( 'mycalib.mat', true, true, true, false );
Calibration = zhang_calibrate_optimise( Calibration, false, false, 100 );
% to run it again for another 100 iterations from the current point:
Calibration = zhang_calibrate_optimise( Calibration, false, true, 100 );
save( 'mycalib.mat', 'Calibration', '-append' );
zhang_plotpoints( Calibration, 1 );


Alternatively, to avoid writing to disk all the time (but remember to save
afterwards!), pass the "Calibration" struct instead of 'mycalib.mat' as the
first argument to every function call after the first.
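
For example, a minimal sketch of the same pipeline run entirely in memory
(same calls and flags as in the example above), saving only once at the end:

Calibration = zhang_init( 'IMG_*.jpg', 'mycalib.mat' );
Calibration = zhang_detectcorners( Calibration, './data/CALTagPattern_14x9.mat' );
Calibration = zhang_calibrate_init( Calibration, true, true, true, false );
Calibration = zhang_calibrate_optimise( Calibration, false, false, 100 );
save( 'mycalib.mat', 'Calibration', '-append' );   % remember to save when you are done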



3) Advanced use only:


If you want to impose additional nonlinear constraints on the calibration
parameters, then look at zhang_calibrate_optimise_nonlcon.m and
zhang_constraint_eq.m. These may be adapted to your particular setup. For
example, if you capture images with the camera moving along a translation
stage, then you may know the relative distance between viewpoints. The
example constraint shows how this would be implemented. Call
zhang_calibrate_optimise_nonlcon.m after zhang_calibrate_optimise.m, but be
aware that it is slower and not nearly as well tested.
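
As a rough illustration only (zhang_constraint_eq.m contains the actual
example; the parameter indices below are made up and depend on how
zhang_paramvector packs the vector), such a constraint in MATLAB's
[c,ceq] = nonlcon(x) form could look like:

function [ c, ceq ] = translation_stage_constraint( x )
    % Hypothetical sketch: enforce a known baseline between views 1 and 2.
    knownStep = 0.05;          % known stage step between the two viewpoints (assumed units)
    idxT1 = 10:12;             % indices of view 1's translation within x (assumed packing)
    idxT2 = 16:18;             % indices of view 2's translation within x (assumed packing)
    c   = [];                  % no inequality constraints
    ceq = norm( x(idxT2) - x(idxT1) ) - knownStep;   % zero when the baseline matches
end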



4) Misc notes:

Note that disabling/enabling images during calibration currently does not
work without manually resetting the parameter vector. If you just change
the Active property of a particular image, then the parameter vector can
no longer be properly unpacked, because its length is incompatible with
the number of images. You must call
Calibration.x = zhang_paramvector( Calibration )
after changing any image's Active property. This would be nicer in an
object-oriented framework, but that would require rearchitecting the whole
thing. Also remember to then run the optimisation from the current point,
not the initial point (i.e. set the last flag of zhang_calibrate_optimise
to true).
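
For example (the Calibration.Images(i).Active field path below is an
assumption based on the description above; adapt it to the actual struct
layout):

Calibration.Images(3).Active = false;              % disable the 3rd image (assumed field path)
Calibration.x = zhang_paramvector( Calibration );  % rebuild the parameter vector
Calibration = zhang_calibrate_optimise( Calibration, false, true, 100 );  % continue from current point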


To create an OpenGL perspective projection matrix from the intrinsic
calibration matrix, see:
http://www.hitlabnz.org/forum/showthread.php?604-argConvGLcpara2-camera-calibration-matrix-to-OpenGl-projection-matrix
http://www.hitl.washington.edu/artoolkit/mail-archive/message-thread-00653-Re--Questions-concering-.html
and the function argConvGLcpara2 in ARToolKit.
Basically the projection looks like:
[ focal   skew       principal_x   0 ]
[   0     aspect*f   principal_y   0 ]
[   0       0             1        0 ]
Multiplying a point in camera coordinates (X,Y,Z,1) by this matrix gives a
point (X',Y',Z'); dividing by the third component then gives the image
point (X'',Y''), where the (now unit) third component is dropped.
The actual OpenGL matrix is a little different, though; see the source code
of argConvGLcpara2 for details.
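
For reference, a minimal sketch of the simple projection described above
(the values below are placeholders, not taken from any real calibration;
this is not the full OpenGL matrix, for which see argConvGLcpara2):

f = 1000; skew = 0; aspect = 1;   % placeholder intrinsics
cx = 320; cy = 240;               % placeholder principal point
X = 0.1; Y = -0.2; Z = 2.0;       % an example point in camera coordinates
K  = [ f,  skew,      cx ;
       0,  aspect*f,  cy ;
       0,  0,         1  ];
P  = [ K, zeros(3,1) ];           % the 3x4 matrix shown above
p  = P * [ X; Y; Z; 1 ];          % homogeneous image point (X', Y', Z')
uv = p(1:2) ./ p(3);              % perspective divide gives (X'', Y'')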


5) Contact:

Questions/comments/bugfixes welcome.
Email me at: atcheson [NOSPAM] [at] cs [dot] ubc [dot] ca