google-deepmind/ferminet

which distribution is used in the pretrain phase?


As mentioned in the FermiNet paper, the probability distribution used in the pretraining phase is the average of the HF distribution and the one given by the FermiNet output.
[Screenshot: the pretraining-distribution equation from the FermiNet paper]
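For reference, my rough paraphrase of the equation in the screenshot (the pretraining samples are drawn from the average of the two normalised densities; see the paper for the exact form):

p_{\text{pre}}(X) = \frac{1}{2}\left( \frac{|\psi_{\text{HF}}(X)|^2}{\int |\psi_{\text{HF}}|^2} + \frac{|\psi_\theta(X)|^2}{\int |\psi_\theta|^2} \right)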

However, the implementation here seems to be doing something different:

In the master branch, even though we concatenate the walkers from both HF and FermiNet, it seems only the ones from HF are used.

[Screenshot: the relevant line in the TF pretraining code]

(My understanding is that in the pretrain_hartree_fock function, only the first tf.distribute.get_replica_context().num_replicas_in_sync walkers are used, which are basically the ones from HF.)
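To make the pattern concrete, here is a toy sketch with dummy tensors of what I think is happening -- this is not the actual repo code and the real shapes may differ:

import tensorflow as tf

num_replicas = 2  # stand-in for tf.distribute.get_replica_context().num_replicas_in_sync
batch, dim = 8, 12  # toy sizes

hf_walkers = tf.zeros([num_replicas, batch, dim])       # walkers sampled from the HF distribution
ferminet_walkers = tf.ones([num_replicas, batch, dim])  # walkers sampled from the FermiNet distribution

concat = tf.concat([hf_walkers, ferminet_walkers], axis=0)  # shape [2 * num_replicas, batch, dim]
used = concat[:num_replicas]  # only the first num_replicas entries survive, i.e. the HF half
print(used.shape)  # (2, 8, 12) -- all zeros, so entirely HF walkers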

On the other hand, in the JAX branch, it seems to me that we only use the FermiNet output as the probability distribution during pretraining. (Sorry, I'm not that familiar with JAX; it's just that I can't find anything related to the HF distribution around the pretraining code.)

Is my understanding correct? Or does it mean the distribution used in pretraining is not that critical? Thanks!

Nice spot. Sorry, this is a bug in the TF code introduced when porting parallelisation over multiple GPUs from TF-Replicator to Distribution Strategy. The results in the FermiNet paper used TF-Replicator, which has a slightly different API to Distribution Strategy. With TF-Replicator, we are using the concatenated distribution on each GPU. The line you highlighted should read

concat_data = tf.concat([self.hf_data_gen.walkers, self.data_gen.walkers], axis=1)
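For what it's worth, a toy check of the corrected line, assuming a [replica, batch, ...] walker layout (the real shapes in the repo may differ):

import tensorflow as tf

hf = tf.zeros([2, 4, 6])   # HF walkers, one row per replica
net = tf.ones([2, 4, 6])   # FermiNet walkers, one row per replica

concat_data = tf.concat([hf, net], axis=1)  # each replica row now holds both halves
print(concat_data.shape)  # (2, 8, 6): 4 HF + 4 FermiNet walkers per replica

With the concatenation along the per-replica batch axis, every GPU sees walkers drawn from both distributions, which is what the TF-Replicator version was doing.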

We made a couple of different design decisions in the JAX version. We are currently only using the FermiNet as the distribution in pretraining. I think the pretraining distribution is not super critical -- the most important thing is to train the network such that it's closer to the ground state wavefunction and that the determinants aren't extremely low rank. Without pretraining, the network can start with an energy in the 1000s of hartrees (positive!) and optimising the network through that energy landscape is ... let's say painful. Certainly we don't see any noticeable difference in training FermiNet after pretraining in the TF or JAX codes.
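For anyone following along, a minimal JAX-style sketch of that pretraining variant -- the names here are made up for illustration and are not the repo's actual API; the walkers are assumed to come from MCMC on |psi_theta|^2 alone:

import jax
import jax.numpy as jnp

def pretrain_loss(params, walkers, hf_orbitals_fn, network_orbitals_fn):
  # Match the network's orbitals to the Hartree-Fock orbitals at the sampled walkers.
  net = network_orbitals_fn(params, walkers)
  target = hf_orbitals_fn(walkers)
  return jnp.mean((net - target) ** 2)

# Toy stand-ins just to show the call pattern.
params = jnp.ones((3,))
walkers = jax.random.normal(jax.random.PRNGKey(0), (16, 6))  # in the real code these come from MCMC on the network
hf_fn = lambda x: jnp.tanh(x)
net_fn = lambda p, x: p[0] * jnp.tanh(x)
loss, grads = jax.value_and_grad(pretrain_loss)(params, walkers, hf_fn, net_fn)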

Got it. Thanks a lot for the detailed explanation!

Whilst we don't believe this makes a significant difference, the TF version on master is now updated to sample from both distributions during pretraining.