cornellius-gp/gpytorch

[Question] How to compute the posterior covariance matrix on the training data?

Opened this issue · 2 comments

Let's say I have a GP with n training points. How do I compute the n×n covariance matrix of the posterior GP on the training data?

import torch
from botorch.models import FixedNoiseGP

def fit_full_model(train_X, train_Y):
    # Fixed (known) observation noise for each training point
    train_Yvar = torch.ones_like(train_Y).reshape(-1, 1) * 1e-4
    fullmodel = FixedNoiseGP(train_X, train_Y.reshape(-1, 1), train_Yvar)
    return fullmodel

train_X = torch.linspace(0, 1, 100).reshape(-1, 1)
train_Y = torch.sin(train_X).reshape(-1, 1)
model = fit_full_model(train_X, train_Y)
model.eval()

I believe the covariance matrix is encoded as a lazy tensor and never actually evaluated. But I do need access to it for a specific application.

The FixedNoiseGP class has a covar_module attribute, which is an instance of a kernel (something like RBFKernel or ScaleKernel(RBFKernel); see the example at https://docs.gpytorch.ai/en/stable/examples/01_Exact_GPs/Simple_GP_Regression.html). You can then call fullmodel.covar_module(train_X).to_dense(); to_dense() evaluates the lazily wrapped kernel matrix and returns a torch.Tensor. Note that this gives you the prior kernel matrix, not the posterior covariance.
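For example, a minimal sketch (assuming the model and train_X defined in the question above):

with torch.no_grad():
    K_prior = model.covar_module(train_X).to_dense()  # n x n prior kernel matrix
print(K_prior.shape)  # torch.Size([100, 100])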

Assuming the model is in evaluation mode, predict_dist = model(test_x) yields the predictive distribution, which is a multivariate normal. The following then gives the posterior covariance matrix (the covariance_matrix property already evaluates the lazy tensor and returns a dense torch.Tensor, so no extra to_dense() call is needed):

predict_dist.covariance_matrix
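
Putting it together for the original question, a minimal sketch that extracts the n×n posterior covariance over the training inputs (assuming the model and train_X defined above; GPyTorch may warn that the input matches the stored training data, which is expected here):

model.eval()
with torch.no_grad():
    predict_dist = model(train_X)              # posterior MultivariateNormal at the training inputs
    post_cov = predict_dist.covariance_matrix  # dense n x n torch.Tensor
print(post_cov.shape)  # torch.Size([100, 100])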