EmuKit/emukit

How to speed up training and prediction on a GPU with GPyTorch or GPflow?

intellisense-team opened this issue · 1 comment

Hi,

I would like to know how to speed up training and prediction on a GPU using a library like GPyTorch or GPflow, since my dataset is large (around 100,000 points). I would really like to build multi-fidelity models with GPU support.

Can someone give me some advice?

Hi there! Have a look at the multi-fidelity deep GP example implemented in Emukit: https://github.com/EmuKit/emukit/tree/main/emukit/examples/multi_fidelity_dgp

It is built with GPflow 1.x and provides an Emukit-compatible model. I don't know how much data it can handle, but it is worth a try. The Cutajar et al. paper that this implementation was created for reportedly ran experiments on 800,000 low-fidelity points.
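
In case it helps, below is a rough sketch of how that example might be wired up. The import path and the `MultiFidelityDeepGP` constructor shown here (per-fidelity lists of inputs and targets) are assumptions based on the folder linked above, so check the example source for the exact interface; the list-to-array conversion helper is part of Emukit's standard multi-fidelity utilities.

```python
import numpy as np

# Standard Emukit helper for the "fidelity index as last input column" convention
from emukit.multi_fidelity.convert_lists_to_array import convert_x_list_to_array

# Assumed module path and class name, based on the example folder linked above --
# verify against the actual source before relying on this
from emukit.examples.multi_fidelity_dgp.multi_fidelity_deep_gp import MultiFidelityDeepGP

# Toy two-fidelity data (replace with your real low/high fidelity sets)
x_low = np.random.rand(500, 1)
x_high = np.random.rand(50, 1)
y_low = np.sin(8 * x_low) + 0.1 * np.random.randn(*x_low.shape)
y_high = np.sin(8 * x_high)

# Assumed constructor: lists of inputs/targets ordered from lowest to highest fidelity
model = MultiFidelityDeepGP([x_low, x_high], [y_low, y_high])

# Emukit IModel interface: runs the GPflow/TensorFlow training loop, which
# TensorFlow 1.x places on a GPU automatically when the GPU build is installed
model.optimize()

# Predict at the high fidelity: append the fidelity index to the test inputs
x_test = np.linspace(0, 1, 100)[:, None]
x_all = convert_x_list_to_array([x_test, x_test])
x_test_high = x_all[len(x_test):]  # rows tagged with fidelity index 1
mean, var = model.predict(x_test_high)
```

Note that the example needs GPflow 1.x and a matching TensorFlow 1.x; with the GPU build of TensorFlow installed, training and prediction should run on the GPU without further changes.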