lu-group/sbinn

get_variable function

Vikramank opened this issue · 7 comments

def get_variable(v, var):
    # Bound the search to [0.2*v, 1.8*v] around the nominal value v
    low, up = v * 0.2, v * 1.8
    l = (up - low) / 2  # half-width of the interval
    v1 = l * torch.tanh(var) + l + low
    return v1
Could you please explain the idea behind the preprocessing of the variables?

This model is meant to learn patient-specific parameters. We could let the network search all possible parameter values (unconstrained); however, we have literature that provides ranges of possible values for the model. As such, we limit the search to those ranges (0.2 to 1.8 of the nominal value). This helps the model train faster by reducing the possible combinations of values. It can also be used as a constraint to consider only realistic values; for example, if a parameter cannot be less than zero, that can be specified to assist the network.
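The transform can be checked in plain Python (a sketch; `math.tanh` stands in for `torch.tanh` here). Since tanh is bounded in (-1, 1), the output always stays strictly inside [0.2·v, 1.8·v], and an unconstrained value of 0 recovers the nominal value v:

```python
import math

def get_variable(v, var):
    """Map an unconstrained trainable value `var` into (0.2*v, 1.8*v)."""
    low, up = v * 0.2, v * 1.8
    l = (up - low) / 2  # half-width of the interval
    return l * math.tanh(var) + l + low

print(get_variable(10.0, 0.0))    # midpoint = nominal value: 10.0
print(get_variable(10.0, 50.0))   # approaches the upper bound 18.0
print(get_variable(10.0, -50.0))  # approaches the lower bound 2.0
```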

Thanks for the explanation. It makes total sense to embed prior knowledge to speed up the search and match realistic values.


Thanks for the ideas! I tried something similar, scaling the parameters to fit within their confidence intervals. The parameters of the model used to generate the data are the mean values in the table. However, the estimated parameters in the results end up almost at the edge of the confidence interval, which suggests the scaling has not helped. Is this caused by some other problem in my code or model? Is this method not applicable to the problem I am studying? Or is the model stuck in a bad local optimum?
The order of the output parameter values is slightly different from that in the table; sorry if it affects the reading!

[image: estimated parameter values]

The issue could be a number of things. Without seeing your code, all I can offer are the following guidelines on common problems:

  1. Make sure your ODE is set up correctly
  2. Make sure the transformation of the variables is set up correctly in the ODE
  3. Make sure any IC or BC you have are being used properly with DeepXDE
  4. Make sure you have enough data points. You have ~20 parameters; in theory you need at least 20 data points, but ideally more.

Feel free to open a new issue if you cannot solve the problem you are facing. It may be helpful to implement structural and practical identifiability analyses to ensure your model and data are able to provide what you need.


Thank you for your analysis. The results of my previous model were not bad, so the dataset and the ODE model are most likely fine. But because the parameters were originally estimated over the full range, the estimates were very unreasonable, which is why I added this scaling step to force them into the confidence intervals. Unfortunately, during training the parameter values keep pushing toward the boundary. This is clearly unreasonable, and they end up far from the correct parameters.
The code for parameter transformation is as follows:

def get_variable(variable, idx):
    # ci is the confidence interval (low, high) for each parameter
    l, r = ci[idx]
    m = (l + r) / 2  # midpoint of the interval
    res = m + tanh(variable) * (r - l) / 2
    return res
# k_weight is the magnitude of the parameter
k1 = get_variable(k1_, 0) * k_weight[0]
k2 = get_variable(k2_, 1) * k_weight[1]
k3 = get_variable(k3_, 2) * k_weight[2]
k4 = get_variable(k4_, 3) * k_weight[3]
k5 = get_variable(k5_, 4) * k_weight[4]
k6 = get_variable(k6_, 5) * k_weight[5]
k7 = get_variable(k7_, 6) * k_weight[6]
k8 = get_variable(k8_, 7) * k_weight[7]
k9 = get_variable(k9_, 8) * k_weight[8]
k10 = get_variable(k10_, 9) * k_weight[9]
k_1 = get_variable(k_1_, 10) * k_weight[10]
k_2 = get_variable(k_2_, 11) * k_weight[11]
k_3 = get_variable(k_3_, 12) * k_weight[12]
k_4 = get_variable(k_4_, 13) * k_weight[13]
k_5 = get_variable(k_5_, 14) * k_weight[14]
k_6 = get_variable(k_6_, 15) * k_weight[15]
k_7 = get_variable(k_7_, 16) * k_weight[16]
k_8 = get_variable(k_8_, 17) * k_weight[17]
k_9 = get_variable(k_9_, 18) * k_weight[18]
k_10 = get_variable(k_10_, 19) * k_weight[19]
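As an aside, the twenty nearly identical assignments could be collapsed into a loop. This is only an illustrative sketch: the raw variables, ci, and k_weight below are made-up stand-ins, and get_variable takes the interval directly instead of an index:

```python
import math

def get_variable(variable, low, high):
    """Same tanh transform as above, taking the interval bounds directly."""
    mid = (low + high) / 2
    return mid + math.tanh(variable) * (high - low) / 2

# Stand-in data (hypothetical): 20 raw trainable values, intervals, and weights
raw_vars = [0.0] * 20
ci = [(0.5, 1.5)] * 20
k_weight = [1.0] * 20

params = [get_variable(v, *ci[i]) * k_weight[i] for i, v in enumerate(raw_vars)]
print(len(params), params[0])  # 20 parameters, each at its interval midpoint here
```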

If this part of the code is not enough to see the problem, I could open a new issue. But the full source is a bit long, and to reduce the line count I packaged a lot of static data into data files instead of writing it directly into the code, which might make it inconvenient to read. That is why I haven't opened an issue yet; maybe I should set up a repository instead?

From what you have I cannot determine any issues. I would need to see what ci is, and your hyperparameter setup. If you build a repository and share I can take a look at what you have to assist.


ci means confidence interval. Its values are read from a file, and its shape is (20, 2): there are 20 parameters, and the confidence interval of each parameter is represented by (low, high).
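A (20, 2) array like ci could, for example, be read from a simple CSV of low,high pairs. This is only a sketch with two hypothetical rows, not the actual file format used in the repository:

```python
import csv
import io

# Hypothetical file contents: one "low,high" line per parameter
data = io.StringIO("0.5,1.5\n2.0,6.0\n")
ci = [tuple(map(float, row)) for row in csv.reader(data)]
print(ci)  # [(0.5, 1.5), (2.0, 6.0)]
```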

I've got the repository set up, thanks for your willingness to help!
https://github.com/chenyv118/SBINN-Biodiesel
Welcome!