BenevolentAI/MolBERT

Reproducing MolBERT results on QSAR tasks

Opened this issue · 2 comments

Hi MolBERT team,

First of all thank you for releasing this repository and providing the scripts to reproduce your paper,
it is deeply appreciated!

I have an issue reproducing the QSAR results from Table 3 in the paper for MolBERT and MolBERT (finetune),
as detailed below:

  1. I can exactly reproduce the Table 3 entries for RDKit and ECFC4 using scripts/run_qsar_test_molbert.py so that is reassuring
  2. The MolBERT featurizer, however, yields lower AUROCs: for BACE I get 0.835 vs 0.849 from the paper, and for BBBP I get 0.744 vs 0.750 in the paper.
  3. Similarly for MolBERT (finetune) using scripts/run_finetuning.py: for BBBP I get 0.751 vs the 0.762 reported in the paper.

The pre-trained model I am using is the one provided in the README i.e. https://ndownloader.figshare.com/files/25611290

Could it somehow be that I am using the wrong weights, or that the wrong weights were uploaded to figshare? This would affect the results in both 2. and 3. above, so it would make sense.

Finally, the parameters I have been using for the fine-tuning are the following:

  • freeze_level = 0, taken from the answer in #3
  • learning_rate = 3e-5, taken from the paper, although I could only find the value for pre-training, not fine-tuning
  • batch_size = 16

All other arguments are left to the defaults provided in the code. Should the above arguments reproduce results similar to the paper?

Thanks in advance!

Tom

pykao commented

Hi @TWRogers ,

I tried to reproduce the QSAR tasks using the script scripts/run_qsar_test_molbert.py, but I find it is hard to run CDDD and MolBERT in the same script: MolBERT is supported on Python 3.7 while CDDD is supported on Python 3.6. There are also some TensorFlow issues, such as CUDA and version mismatches, if I upgrade CDDD to Python 3.7. How did you solve this issue?
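One possible workaround (a sketch, not something provided by either repository) is to avoid running CDDD and MolBERT in the same process at all: compute the CDDD features in the Python 3.6 environment, serialize them to disk, and load them in the Python 3.7 MolBERT environment for the QSAR evaluation. The helper names and file paths below are hypothetical:

```python
# Sketch: decouple the two incompatible environments by passing features
# through an .npz file. save_features() would run in the CDDD (Python 3.6)
# environment, load_features() in the MolBERT (Python 3.7) environment.
import numpy as np

def save_features(smiles, features, path):
    """Persist SMILES strings together with their descriptor matrix."""
    np.savez(path, smiles=np.array(smiles), features=np.asarray(features))

def load_features(path):
    """Reload the SMILES list and descriptor matrix for the QSAR benchmark."""
    data = np.load(path, allow_pickle=True)
    return data["smiles"].tolist(), data["features"]

# Round-trip demo with dummy 512-dimensional descriptors (CDDD's latent size):
smiles = ["CCO", "c1ccccc1"]
feats = np.random.rand(2, 512)
save_features(smiles, feats, "demo_features.npz")
loaded_smiles, loaded_feats = load_features("demo_features.npz")
assert loaded_smiles == smiles and np.allclose(loaded_feats, feats)
```

This keeps each model inside its own supported Python/TensorFlow stack, at the cost of a two-step workflow.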

Best regards,
Po-Yu

pykao commented

@TWRogers ,

I also get poor predictions compared to the MolBERT paper. Fine-tuned MolBERT sometimes performs worse than MolBERT without any fine-tuning. Maybe we need the detailed settings for fine-tuning MolBERT.

Best regards,
Po-Yu