Steps:
- Download the English Reddit dataset as described in the original paper: "Commonsense Knowledge Aware Conversation Generation with Graph Attention".
- Download the BERT-base model and put it into the directory 'uncased_L-12_H-768_A-12'.
- Use 'preprocess_entity_words.py' to generate the datasets.
- Run 'train_triple_2bert.sh' to train and test the model.
We ran our experiments on 8 V100 GPUs with a batch size of 48.
The 'train_triple_2bert.sh' script provided here uses 4 GPUs with a batch size of 24.
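Note that both configurations keep the same per-GPU load: assuming the reported batch sizes are totals across all GPUs (an assumption, not stated explicitly here), each card processes 6 examples per step in either setup. A minimal sketch of that arithmetic:

```python
def per_gpu_batch(total_batch: int, num_gpus: int) -> int:
    """Split a total batch size evenly across GPUs.

    Hypothetical helper for illustration only; it assumes the batch
    sizes quoted above are totals, not per-GPU values.
    """
    assert total_batch % num_gpus == 0, "batch size must divide evenly"
    return total_batch // num_gpus

# Paper setup: 8 V100s, batch size 48 -> 6 per GPU
print(per_gpu_batch(48, 8))  # 6
# Provided script: 4 GPUs, batch size 24 -> 6 per GPU
print(per_gpu_batch(24, 4))  # 6
```

If you change the number of GPUs in 'train_triple_2bert.sh', scaling the batch size proportionally would preserve this per-GPU load.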