Thompson sampling
ekalosak opened this issue · 6 comments
We don't have an implementation of Thompson sampling, do we?
Not quite, no! The closest we have is in this example: https://github.com/EmuKit/emukit/blob/master/emukit/examples/preferential_batch_bayesian_optimization/pbbo/acquisitions/thompson_sampling_acquisition.py
What's stopping this from being a fully fledged module in emukit?
The solution @apaleyes posted is probably not optimal, since it is rather slow. It is based on sequentially sampling from the posterior and conditioning each new sample on the samples already drawn. It is very general and works for all GP models (not only ones with a Gaussian observation model). There are faster approximate strategies for Thompson sampling when the observation model is Gaussian (see the sketch below).
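To illustrate what such a faster strategy could look like, here is a minimal, self-contained sketch (not emukit code) of approximate Thompson sampling for a GP with an RBF kernel and Gaussian noise: one posterior sample is drawn up front as an explicit function via random Fourier features, so no sequential conditioning on previously drawn samples is needed. All kernel, noise and feature-count values below are arbitrary assumptions for illustration.

```python
import numpy as np


def sample_posterior_function(X, y, lengthscale=1.0, variance=1.0,
                              noise_var=0.1, n_features=200, seed=0):
    """Return a callable f(x) that is one approximate sample from the GP posterior."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]

    # Random Fourier features for an RBF kernel: k(x, x') ~= phi(x) @ phi(x')
    W = rng.normal(scale=1.0 / lengthscale, size=(n_features, d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    scale = np.sqrt(2.0 * variance / n_features)
    phi = lambda x: scale * np.cos(x @ W.T + b)

    # Bayesian linear regression on the features gives the posterior over the weights
    Phi = phi(X)
    A = Phi.T @ Phi + noise_var * np.eye(n_features)
    A_inv = np.linalg.inv(A)
    mean = A_inv @ Phi.T @ y
    cov = noise_var * A_inv

    # One draw of the weights fixes one sample path of the posterior
    theta = rng.multivariate_normal(mean.ravel(), cov)
    return lambda x: phi(x) @ theta


# Thompson sampling step: minimise the sampled function over a candidate set
X = np.random.rand(20, 1)
y = np.sin(6 * X) + 0.1 * np.random.randn(20, 1)
f_sample = sample_posterior_function(X, y)
candidates = np.linspace(0, 1, 200).reshape(-1, 1)
next_x = candidates[np.argmin(f_sample(candidates))]
```

A new sample function would be drawn once per outer-loop iteration, which is exactly the kind of "reset" discussed below.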
My implementation doesn't fully follow the existing API. The main reason is that the Thompson sampling algorithm needs to be "reset" between iterations, so it requires separate reset functionality. I don't know whether this could be solved within the current API. However, all Thompson sampling algorithms would need this kind of functionality, as they are essentially based on randomness. I think all acquisition strategies that currently exist in emukit are deterministic, so this wasn't considered when the API was designed. What do you think @apaleyes?
@esiivola when, in terms of the optimization loop, does it need to be reset? Is this reset done externally? I am absolutely certain we can do it either way.
@apaleyes it always needs to be reset before the optimization routine starts. The reset is done externally, right before the acquisition function is optimized (here:
I think it would probably be easiest to add an empty "reset" method to emukit.core.Acquisition, and only the acquisition functions that need it would override it. This reset could then always be called at the beginning of optimization. Something like the sketch below.
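A rough sketch of that suggestion, not actual emukit code: a no-op `reset()` on the acquisition base class, overridden only by stochastic acquisitions. The `ThompsonSampling` class and the `draw_posterior_sample` model method below are hypothetical, just to show how the hook would be used.

```python
import numpy as np


class Acquisition:
    def reset(self) -> None:
        # Empty default: existing deterministic acquisitions inherit this and are unaffected.
        pass

    def evaluate(self, x: np.ndarray) -> np.ndarray:
        raise NotImplementedError


class ThompsonSampling(Acquisition):
    def __init__(self, model):
        self.model = model
        self._sampled_f = None

    def reset(self) -> None:
        # Draw a fresh posterior sample; called once per outer-loop iteration.
        self._sampled_f = self.model.draw_posterior_sample()  # hypothetical model method

    def evaluate(self, x: np.ndarray) -> np.ndarray:
        return self._sampled_f(x)


# In the outer loop, the acquisition optimizer (or the loop itself) would call
# reset() right before optimizing the acquisition:
#
#   acquisition.reset()
#   x_next = acquisition_optimizer.optimize(acquisition)
```

This keeps the change backwards compatible: deterministic acquisitions never notice the extra call.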