CML-Proj


CML Serum Cortisol Level Deconvolution

The given data consist of a sparse input vector u observed at times tu, together with partial measurements of the serum cortisol concentration y sampled at regular intervals ty. The paper [1] defines a model, with unknowns theta_1 and theta_2 as the model parameters, that establishes a relationship between the discretely sampled y and the sparse u.

The task is to reconstruct the complete y from the available data. To do this, we first compute the values of the parameters theta_1 and theta_2 that minimize the fitting error. Once we have these values, we reconstruct a continuous y according to the model.

Methodology

The problem was solved using the SciPy and NumPy libraries. We first formulate an optimization problem for the model and minimize the objective function, which in our case is |y - A*y_0 - B*u|^2.
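In code, this objective is a plain least-squares residual. The sketch below is illustrative: `make_A` and `make_B` are hypothetical helpers standing in for however the model matrices A and B are constructed from theta_1 and theta_2 in [1].

```python
import numpy as np

def objective(theta, y_obs, y0, u, make_A, make_B):
    """Least-squares objective |y - A(theta)*y0 - B(theta)*u|^2.

    make_A / make_B are hypothetical callables that build the model
    matrices for a given (theta_1, theta_2); their exact form comes
    from the model in [1].
    """
    theta_1, theta_2 = theta
    A = make_A(theta_1, theta_2)
    B = make_B(theta_1, theta_2)
    residual = y_obs - A @ y0 - B @ u
    # Squared Euclidean norm of the residual.
    return float(residual @ residual)
```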

The code is presented in a Jupyter notebook, and the implementation is as follows:

  • Read the given data from the file and preprocess it for our use case.
  • Define the model as per the paper and the objective function given above.
  • Initialize the parameters theta_1 and theta_2 with random, unequal values.
  • Use scipy.optimize.minimize to minimize the objective function over the parameters theta_1 and theta_2. The Nelder-Mead method was used to solve this problem.
  • Once we obtain the estimated values of theta_1 and theta_2, use them to reconstruct the complete signal y.
  • Plot the predicted y against the observed y at the end of the notebook.
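The fitting steps above can be sketched as follows. The first-order recursion used here is only an illustrative stand-in for the actual model in [1], and the data are synthetic; the point is the shape of the Nelder-Mead fitting loop.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(theta, y0, u):
    """Hypothetical discrete model: y[k+1] = theta_1*y[k] + theta_2*u[k].
    The real model comes from [1]; this stand-in only shows the workflow."""
    theta_1, theta_2 = theta
    y = np.empty(len(u) + 1)
    y[0] = y0
    for k in range(len(u)):
        y[k + 1] = theta_1 * y[k] + theta_2 * u[k]
    return y

def objective(theta, y_obs, y0, u):
    # Squared error between observed and model-predicted y.
    return np.sum((y_obs - simulate(theta, y0, u)) ** 2)

# Synthetic sparse input and noiseless observations, just to show the call;
# in the notebook, y and u come from the given data file.
rng = np.random.default_rng(0)
u = rng.poisson(1.0, size=50).astype(float)
y_true = simulate((0.73, 0.006), 1.0, u)

# Derivative-free Nelder-Mead minimization over (theta_1, theta_2).
res = minimize(objective, x0=(0.5, 0.5), args=(y_true, 1.0, u),
               method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})
theta_1, theta_2 = res.x
```

With the fitted theta_1 and theta_2, the same `simulate` call reconstructs the complete y for plotting against the observations.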

Results

  • The theta_1 and theta_2 values were initialized randomly but converged to values close to 0.73 and 0.006, respectively.
  • The estimated y was plotted against the observed y; the plot is shown at the end of the Python notebook.

References

[1] Deconvolution of Serum Cortisol Levels by Using Compressed Sensing

[2] Nelder-Mead method