Python library to model and interpolate the magnetic field. The objective is to apply it to the EMT system developed by the Biomedical Design Laboratory (University College Cork, Ireland) to mitigate field distortions during a surgical operation. The challenge is to develop software with real-time performance that assists the operator during the calibration of the instrument in a fast and simple way.
For each described algorithm, the training set is a cloud of points randomly sampled inside a fixed volume, while the validation set is a regular grid of points at which the three magnetic field components have to be predicted. The cuboid plots show only the field magnitude, while the predictions of the three components are shown along the diagonal that goes from one corner of the cuboid to the opposite one. Only a brief description of the main results with plots is given here; the numeric results and the theory behind them are left to the paper.
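As a minimal sketch (the volume bounds, set sizes and placeholder measurements below are illustrative assumptions, not the values used in the experiments), the two sets could be generated like this:

```python
# Minimal sketch: random training cloud inside a fixed cuboid and a regular
# validation grid (bounds and sizes are placeholder values).
import numpy as np

rng = np.random.default_rng(0)

# Cuboid bounds in metres (hypothetical values, only for illustration).
lo, hi = np.array([-0.1, -0.1, 0.0]), np.array([0.1, 0.1, 0.2])

# Training set: N points sampled uniformly inside the volume.
n_train = 200
training_positions = rng.uniform(lo, hi, size=(n_train, 3))

# Placeholder measurements at the training points (in reality these come from
# the sensor: three magnetic field components per point).
magnetic_field_measurements = rng.normal(size=(n_train, 3))

# Validation set: a regular grid where the field components are predicted.
axes = [np.linspace(l, h, 10) for l, h in zip(lo, hi)]
validation_positions = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
```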
The following plots show different metrics versus the number of random points in the training set. All evaluations are performed over the same validation set.
Since in a realistic scenario the magnetic field is not known, the standard deviation of each prediction has to be considered to assess how well the model is doing. Therefore, in the following plots the marker size of each point corresponds to its level of 'uncertainty'.
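One minimal way to obtain such a per-point uncertainty is the predictive standard deviation of a Gaussian process regressor; the following sketch uses scikit-learn's `GaussianProcessRegressor` on the arrays of the previous sketch, with an assumed kernel and noise level:

```python
# Sketch: fit a Gaussian-process regressor on the training positions and use
# the predictive standard deviation as the 'uncertainty' of each validation
# point. Kernel choice and noise level are assumptions, not the exact setup.
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

kernel = RBF(length_scale=0.05) + WhiteKernel(noise_level=1e-6)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(training_positions, magnetic_field_measurements)  # targets of shape (N, 3)

# Predictive mean and standard deviation on the validation grid;
# the standard deviation can drive the marker size in the plots.
mean, std = gpr.predict(validation_positions, return_std=True)
```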
The next two .gif files show the evolution of the confidence intervals during training for each of the three magnetic field components measured along the diagonal of the cube. The first one is a simulation with noiseless measurements; the second one has noisy measurements, with a ratio between the noise standard deviation and the signal RMS of -60 dB.
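A possible way to generate the noisy measurements, assuming the -60 dB figure is the ratio between the noise standard deviation and the signal RMS:

```python
# Sketch: add white Gaussian noise whose standard deviation sits -60 dB below
# the RMS of the noiseless measurements (this interpretation is an assumption).
import numpy as np

def add_noise(b, ratio_db=-60.0, seed=0):
    """Return measurements corrupted by zero-mean Gaussian noise."""
    rng = np.random.default_rng(seed)
    rms = np.sqrt(np.mean(b ** 2))
    noise_std = rms * 10.0 ** (ratio_db / 20.0)
    return b + rng.normal(scale=noise_std, size=b.shape)

noisy_measurements = add_noise(magnetic_field_measurements)
```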
Using 24-dimensional points (8 coils x 3 magnetic field components) instead of only the first coil, the results improve while the required computational time remains acceptable. For instance, the correlation between the prediction error and its uncertainty starts to look linear, as shown in the following plot.
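For illustration, the 24-dimensional targets could be built by flattening the 8 x 3 measurements taken at each position (the array shapes below are assumptions):

```python
# Sketch: stack the measurements of all 8 emitter coils into one
# 24-dimensional target per training position (shapes are assumptions).
import numpy as np

# raw_measurements: shape (N, 8, 3) -> one row of 24 values per position.
raw_measurements = np.random.default_rng(0).normal(size=(200, 8, 3))  # placeholder
targets_24d = raw_measurements.reshape(len(raw_measurements), -1)     # (N, 24)
```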
Fixing the hyperparameter of the kernel, i.e. the length scale, a vector of weights can be obtained by simply solving a linear system of equations for each component. After computing the kernel matrix of the training points, a single line of code is enough:
W = np.linalg.solve(sklearn.metrics.pairwise.rbf_kernel(training_positions, gamma=gamma), magnetic_field_measurements)
Then, computing the kernel between the training points and the validation ones, a single matrix multiplication predicts the magnetic field, as shown in the following .gif.
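Putting the two steps together, a runnable sketch of the whole interpolation (reusing the array names above; the value of `gamma` is a placeholder):

```python
# Sketch of the full RBF interpolation: solve for the weights on the training
# points, then predict on the validation grid with one matrix multiplication.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

gamma = 1.0 / (2 * 0.05 ** 2)  # placeholder: gamma = 1 / (2 * length_scale**2)

# Kernel matrix between training points and one weight vector per component.
K = rbf_kernel(training_positions, gamma=gamma)        # (N, N)
W = np.linalg.solve(K, magnetic_field_measurements)    # (N, 3) or (N, 24)

# Cross-kernel between validation and training points, then predict.
K_star = rbf_kernel(validation_positions, training_positions, gamma=gamma)  # (M, N)
predicted_field = K_star @ W                                                # (M, 3) or (M, 24)
```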
The following plot shows the correlation between the nMAE and the standard deviation (computed through the Cholesky decomposition) for each of the 24-dimensional points.
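For reference, this is how the predictive standard deviation can be obtained through a Cholesky factorisation of the training kernel matrix (the standard Gaussian-process variance formula; the jitter term and the nMAE normalisation shown here are assumptions):

```python
# Sketch: Gaussian-process predictive standard deviation through the Cholesky
# decomposition of the (jittered) training kernel matrix.
import numpy as np
from scipy.linalg import cholesky, solve_triangular
from sklearn.metrics.pairwise import rbf_kernel

jitter = 1e-10  # assumed small diagonal term for numerical stability
K = rbf_kernel(training_positions, gamma=gamma) + jitter * np.eye(len(training_positions))
L = cholesky(K, lower=True)

K_star = rbf_kernel(validation_positions, training_positions, gamma=gamma)
V = solve_triangular(L, K_star.T, lower=True)          # (N, M)

# k(x*, x*) = 1 for the RBF kernel, so the predictive variance is 1 - ||v||^2.
predictive_std = np.sqrt(np.clip(1.0 - np.sum(V ** 2, axis=0), 0.0, None))

# Illustrative nMAE (normalising by the RMS of the true field is an assumption).
def nmae(b_true, b_pred):
    return np.mean(np.abs(b_true - b_pred)) / np.sqrt(np.mean(b_true ** 2))
```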
Taking into account the orientation information as well, and using uniaxial measurements (one-dimensional measurements of the magnetic field instead of the three components, which allow a smaller sensor), it is possible to interpolate the magnetic field by changing the definition of the radial basis function.
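One possible way to do this, shown here only as an illustrative sketch and not necessarily the definition used in the paper, is to project the kernel along the sensor axes: a uniaxial measurement is B · n, so, assuming independent field components that share a single scalar RBF, the covariance between two such measurements becomes (n_i · n_j) k(x_i, x_j):

```python
# Illustrative sketch (not necessarily the kernel used in the paper): for
# uniaxial measurements b_i = B(x_i) . n_i, with independent field components
# sharing one scalar RBF, the covariance between two measurements is
# (n_i . n_j) * k(x_i, x_j).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def uniaxial_kernel(positions_a, axes_a, positions_b, axes_b, gamma):
    """Kernel matrix between two sets of uniaxial measurements.

    positions_*: (N, 3) sensor positions; axes_*: (N, 3) unit sensing axes.
    """
    return (axes_a @ axes_b.T) * rbf_kernel(positions_a, positions_b, gamma=gamma)
```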
These algorithms are useful to calibrate the instrument. By collecting the sensor's data it is possible to obtain a map of the static magnetic field, and the real-time feedback from the program helps to understand which areas still have to be covered during the sampling. This is the first interface of the program:
in which:
- each cube shows a fixed grid of points colored from red to green, meaning not covered and well covered, respectively (a minimal coloring sketch follows this list)
- the position and orientation of the sensor are shown as a cone in each cube
- the first cube shows the uncertainty relative to the x-component of the field, the second one to the y-component, and the third one to the z-component
- each cube has an independent view, so it is possible to zoom, pan and rotate it at will, even during the sampling.
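As a rough sketch of how the coverage feedback could be computed (the thresholds and the red-green mapping below are assumptions, not the values used in the program), each grid point can be colored according to its current predictive standard deviation:

```python
# Sketch: map each grid point's predictive standard deviation to a red-green
# color (thresholds and color mapping are assumptions, not the GUI's values).
import numpy as np

def coverage_colors(predictive_std, std_well_covered=0.01, std_not_covered=0.1):
    """Return an (M, 3) RGB array: green = well covered, red = not covered."""
    t = np.clip((predictive_std - std_well_covered) /
                (std_not_covered - std_well_covered), 0.0, 1.0)
    return np.stack([t, 1.0 - t, np.zeros_like(t)], axis=-1)

# Example: colors for the validation grid, using the std computed above.
grid_colors = coverage_colors(predictive_std)
```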