brian-team/brian2modelfitting

Store simulation/training results


The first stages of the sbi process (simulation and training of the network) are time-consuming. The results can be reused for several experimental data sets, so it would be wasteful to simulate more often than necessary. Check whether sbi provides mechanisms to store/load these results and wrap them as part of the class created for #44.

This is covered in the FAQ of the official sbi documentation.

NeuralInference objects are not picklable, and the proposed way of saving them is to use dill.
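
For reference, a dill-based round trip might look like the following sketch (assuming inference holds a trained NeuralInference object; dill mirrors the pickle interface):

import dill

# store the inference object (not picklable with plain pickle)
with open("/path/to/inference.pkl", "wb") as handle:
    dill.dump(inference, handle)

# ... and restore it later
with open("/path/to/inference.pkl", "rb") as handle:
    inference = dill.load(handle)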

Since NeuralPosterior objects are picklable, storing is as simple as:

import pickle

posterior = ...  # a trained NeuralPosterior

with open("/path/to/posterior.pkl", "wb") as handle:
    pickle.dump(posterior, handle)
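
For completeness, restoring it later is the mirror operation:

with open("/path/to/posterior.pkl", "rb") as handle:
    posterior = pickle.load(handle)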

Thanks for the info. It might be worth looking into the sbi code (or contacting the developers) to see whether there are alternatives to pickle/dill, e.g. whether we can directly store the PyTorch model state dictionary to disk (see the PyTorch docs). The problem with pickle-based approaches is that they store more than we want (class structure, module names, etc.), which can easily break when switching between machines/versions/etc.

Once we have a NeuralPosterior, we can access its state dictionary through the net attribute by calling state_dict():

posterior.net.state_dict()

Then we can use PyTorch to save this state dictionary. Since it is basically an ordered dictionary, it is easy to handle and quite lightweight compared to pickled objects.
Here is a minimal working example demonstrating how to store the neural density estimator's state dictionary.

import torch
from sbi import utils
from sbi import analysis
from sbi.inference import SNPE, prepare_for_sbi, simulate_for_sbi


def simulator(params):
    # toy simulator: the parameters plus Gaussian observation noise
    return params + torch.randn(params.shape) * 0.1


# uniform prior over 3 parameters and a synthetic observation
prior = utils.BoxUniform(low=-2*torch.ones(3), high=2*torch.ones(3))
observation = torch.zeros(3)

# learning the density estimator and building the posterior
simulator, prior = prepare_for_sbi(simulator, prior)
inference = SNPE(prior)
theta, x = simulate_for_sbi(simulator, proposal=prior, num_simulations=500)
density_estimator = inference.append_simulations(theta, x).train()
og_posterior = inference.build_posterior(density_estimator)

# sampling from the posterior
samples = og_posterior.sample((10000,), x=observation)
log_probability = og_posterior.log_prob(samples, x=observation)
_ = analysis.pairplot(samples)

# save the density estimator state dictionary
torch.save(og_posterior.net.state_dict(), 'psd.pth')

and to load it:

# build new "empty" posterior
new_posterior = inference.build_posterior()

# load the state dictionary
new_posterior.net.load_state_dict(torch.load('psd.pth'))

# sampling from the new posterior
samples = new_posterior.sample((10000,), x=observation)
log_probability = new_posterior.log_prob(samples, x=observation)
_ = analysis.pairplot(samples)
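
One caveat: build_posterior() is called here on the same inference object that was just trained, so the snippet only demonstrates the mechanism within one session. In a fresh process, the network architecture would first have to be rebuilt (e.g. by re-running the same SNPE setup) before load_state_dict() can restore the weights.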

Everything works like a charm.
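
For the wrapping mentioned in the original issue, the store/load step could then be reduced to a pair of small helpers; the sketch below is only illustrative (the function names are hypothetical, not taken from the actual implementation):

import torch

def save_posterior_state(posterior, path):
    # store only the density estimator's weights (lightweight, version-robust)
    torch.save(posterior.net.state_dict(), path)

def load_posterior_state(inference, path):
    # rebuild a posterior from a trained inference object and stored weights
    posterior = inference.build_posterior()
    posterior.net.load_state_dict(torch.load(path))
    return posterior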

Closed via #52