Then, create a directory for recording the fine-tuned results (to avoid errors):
mkdir logs
./go.sh $GPU_ID $DATASET_NAME $AUGMENTATION
$DATASET_NAME is the dataset name (please refer to https://chrsmrrs.github.io/datasets/docs/datasets/), $GPU_ID is the GPU ID to launch on, and $AUGMENTATION can be random2, random3, or random4, which sample from {NodeDrop, Subgraph}, {NodeDrop, Subgraph, EdgePert}, and {NodeDrop, Subgraph, EdgePert, AttrMask}, respectively.
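As a rough sketch of what the randomN options mean, assuming each one simply draws a view's augmentation uniformly at random from the listed pool (the names and sampling logic below are illustrative; the actual logic lives in the training code):

```python
import random

# Illustrative mapping: each randomN option corresponds to a pool of
# N base augmentations, and one is drawn uniformly per view.
AUG_POOLS = {
    "random2": ["NodeDrop", "Subgraph"],
    "random3": ["NodeDrop", "Subgraph", "EdgePert"],
    "random4": ["NodeDrop", "Subgraph", "EdgePert", "AttrMask"],
}

def sample_augmentation(option: str) -> str:
    """Pick one augmentation from the pool named by `option`."""
    return random.choice(AUG_POOLS[option])

print(sample_augmentation("random4"))
```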
./clsa.sh $GPU_ID $DATASET_NAME $AUGMENTATION $STRO_AUGMENTATION
$DATASET_NAME is the dataset name (please refer to https://chrsmrrs.github.io/datasets/docs/datasets/), $GPU_ID is the GPU ID to launch on, and $AUGMENTATION can be random2, random3, or random4, which sample from {NodeDrop, Subgraph}, {NodeDrop, Subgraph, EdgePert}, and {NodeDrop, Subgraph, EdgePert, AttrMask}, respectively. $STRO_AUGMENTATION is one of {stro_dnodes, stro_subgraph}. For example:
./clsa.sh 2 COLLAB dnodes stro_dnodes
expands to:
CUDA_VISIBLE_DEVICES=2 python gbyol.py --DS COLLAB --lr 0.01 --aug dnodes --stro_aug stro_dnodes
Or
./byol.sh $GPU_ID $DATASET_NAME $AUGMENTATION $STRO_AUGMENTATION
$DATASET_NAME is the dataset name (please refer to https://chrsmrrs.github.io/datasets/docs/datasets/), $GPU_ID is the GPU ID to launch on, and $AUGMENTATION can be random2, random3, or random4, which sample from {NodeDrop, Subgraph}, {NodeDrop, Subgraph, EdgePert}, and {NodeDrop, Subgraph, EdgePert, AttrMask}, respectively. $STRO_AUGMENTATION is one of {stro_dnodes, stro_subgraph}.
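The stro_* options are the "stronger" counterparts of the base augmentations. A minimal sketch of the idea for node dropping, assuming the strong variant simply removes a larger fraction of nodes (the ratios and function name here are illustrative, not the repo's actual values):

```python
import random

def drop_nodes(nodes, drop_ratio, seed=None):
    """Return a view of `nodes` with a fraction `drop_ratio` removed."""
    rng = random.Random(seed)
    n_drop = int(len(nodes) * drop_ratio)
    dropped = set(rng.sample(nodes, n_drop))
    return [v for v in nodes if v not in dropped]

nodes = list(range(100))
weak_view = drop_nodes(nodes, drop_ratio=0.2, seed=0)    # dnodes-like view
strong_view = drop_nodes(nodes, drop_ratio=0.4, seed=0)  # stro_dnodes-like view
print(len(weak_view), len(strong_view))  # 80 60
```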
- The backbone implementation is adapted from https://github.com/fanyun-sun/InfoGraph/tree/master/unsupervised.
- The BYOL implementation is adapted from https://github.com/lucidrains/byol-pytorch.