This repository contains the implementation of the following algorithms:
- TTransE
- TA-TransE
- DE-TransE
- TA-DistMult
- DE-DistMult
- TA-ComplEx
- DE-ComplEx
- TA-SimplE
- DE-SimplE
- TA-RotatE
- DE-RotatE
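For context, the DE-* variants follow the diachronic embedding idea (Goel et al.): a fraction of each entity embedding's dimensions is made time-dependent, while the remaining dimensions stay static. Below is a minimal sketch of that idea; the class name, parameter split, and sine activation are assumptions based on the original paper, not necessarily this repository's exact implementation.

```python
import torch
import torch.nn as nn

class DiachronicEmbedding(nn.Module):
    """Entity embedding whose first `temporal_dims` features vary with time (illustrative sketch)."""
    def __init__(self, num_entities, dim, temporal_dims):
        super().__init__()
        self.static = nn.Embedding(num_entities, dim)             # a_v: static part
        self.freq   = nn.Embedding(num_entities, temporal_dims)   # w_v: per-dimension frequencies
        self.phase  = nn.Embedding(num_entities, temporal_dims)   # b_v: per-dimension phases
        self.temporal_dims = temporal_dims

    def forward(self, entities, timestamps):
        # entities: LongTensor (batch,), timestamps: FloatTensor (batch,)
        a = self.static(entities)
        t = timestamps.unsqueeze(-1)
        temporal = a[:, :self.temporal_dims] * torch.sin(self.freq(entities) * t + self.phase(entities))
        return torch.cat([temporal, a[:, self.temporal_dims:]], dim=-1)
```

These time-dependent entity embeddings are then plugged into the usual DistMult/SimplE/ComplEx-style scoring functions. The TA-* variants instead make the relation representation time-aware by encoding the timestamp tokens with a sequence model.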
Before installing Horovod, check the steps here. Then install the Python requirements:
pip install -r requirements.txt
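If Horovod was installed as part of the requirements, you can optionally verify that its PyTorch bindings work before launching distributed runs (this check is only a suggestion, not part of the repository):

```python
# Optional sanity check: confirm Horovod's PyTorch bindings import and initialize.
import horovod.torch as hvd

hvd.init()
print(f"Horovod OK: rank {hvd.rank()} of {hvd.size()}")
```

Alternatively, running horovodrun --check-build lists the frameworks and controllers Horovod was built with.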
Run the following commands to install the requirements needed for training on TPUs:
VERSION="xrt==1.15.0"
curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
python pytorch-xla-env-setup.py --version $VERSION
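After the setup script finishes, a quick way to confirm that PyTorch/XLA can acquire a TPU device is the following (illustrative check only):

```python
# Optional sanity check: acquire an XLA (TPU) device and run a tiny computation on it.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
x = torch.randn(2, 2, device=device)
print(device, x.sum().item())
```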
Sample distributed CPU/GPU training configuration with 2 processes:
horovodrun -np 2 -H localhost:2 python -BW ignore main.py \
--dataset [DATASET] \
--model DEDistMult \
--dropout 0.2 \
--embedding-size 128 \
--learning-rate 0.01 \
--epochs 100 \
--batch-size 256 \
--negative-samples 64 \
--filter \
--mode head \
--validation-frequency 2 \
--threads 2 \
--workers 1
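Under horovodrun, each of the two processes trains on its own shard of the data and Horovod averages the gradients. The snippet below is a minimal sketch of that pattern in PyTorch; the embedding model is a stand-in and the code is not taken from this repository:

```python
import torch
import torch.nn as nn
import horovod.torch as hvd

hvd.init()
if torch.cuda.is_available():
    torch.cuda.set_device(hvd.local_rank())   # pin each process to one GPU

model = nn.Embedding(1000, 128)               # stand-in for a model such as DE-DistMult
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Average gradients across all processes launched by horovodrun.
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())

# Start every worker from identical parameters and optimizer state.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)
```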
Sample distributed TPU training configuration:
NPROC=1 python -BW ignore main.py --dataset [DATASET] \
--model DEDistMult \
--dropout 0.2 \
--embedding-size 128 \
--learning-rate 0.001 \
--epochs 100 \
--batch-size 1024 \
--negative-samples 64 \
--filter \
--mode head \
--validation-frequency 10 \
--tpu \
--threads 4 \
--workers 1
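With the --tpu flag, training is expected to go through PyTorch/XLA. The sketch below shows the usual xla_multiprocessing launch pattern; the model, loss, and process count are illustrative and not this repository's code:

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    device = xm.xla_device()                      # one XLA device per process
    model = nn.Embedding(1000, 128).to(device)    # stand-in embedding model
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    ids = torch.randint(0, 1000, (1024,), device=device)
    loss = model(ids).pow(2).mean()               # dummy loss in place of the KGE objective
    loss.backward()
    xm.optimizer_step(optimizer)                  # all-reduce gradients across TPU cores, then step
    print(f"process {index}: loss {loss.item():.4f}")

if __name__ == "__main__":
    xmp.spawn(_mp_fn, nprocs=1)                   # NPROC in the command above plays a similar role
```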
You can use the --aux-cpu switch to enable mixed CPU training.
To see the list of all available options, use the following command:
python main.py --help