tech-srl/how_attentive_are_gats

Reproducing results on proteins

adrianjav opened this issue · 4 comments

Hi!

I was trying to reproduce the results on the proteins dataset on a V100 GPU, and I ran into a few problems. First I had some issues with the BatchSampler (samplers cannot be passed to iterable datasets), so I just removed it (I am using the latest version of dgl, since 0.6 is not available for my CUDA version).

After fixing that, I came across this error message:
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 64])
at line 98 of models.py:

for i in range(self.n_layers):
    h = self.convs[i](subgraphs[i], h).flatten(1, -1)
    if h_last is not None:
        h += h_last[: h.shape[0], :]
    h_last = h
    h = self.norms[i](h)
    h = self.activation(h, inplace=True)
    h = self.dropout(h)

Is this error related to removing the BatchSampler? And what tensor size is expected at that line?

Thanks for the help

Hi @adrianjav!

The error you describe happens when you feed the BatchNorm layer a batch of size 1.
In this case, that means that there is only one subgraph:

for input_nodes, output_nodes, subgraphs in tqdm(dataloader, leave=False, desc=f"Training epoch {epoch}"):
    subgraphs = [b.to(device) for b in subgraphs]
    new_train_idx = torch.arange(len(output_nodes))
    train_pred_idx = new_train_idx
    pred = model(subgraphs)

Since you removed the batch sampler, you probably broke this logic.
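For reference, this error is easy to reproduce outside the repository. A minimal standalone sketch (my own, not code from models.py):

import torch
import torch.nn as nn

# Minimal reproduction: BatchNorm1d cannot estimate batch statistics
# from a single sample while in training mode.
norm = nn.BatchNorm1d(64)   # 64 matches the n_hidden default below
norm.train()
x = torch.randn(1, 64)      # a "batch" containing a single node
try:
    norm(x)
except ValueError as e:
    print(e)  # Expected more than 1 value per channel when training, ...

# With batch size > 1 (or in eval mode, which uses running statistics)
# the same call goes through.
print(norm(torch.randn(2, 64)).shape)  # torch.Size([2, 64])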

I guess the solution would be to replace the following in proteins_exp.py:

- from dgl.dataloading.pytorch import NodeDataLoader
+ from dgl.dataloading import DataLoader

lines 159-167:

-   train_dataloader = DataLoaderWrapper(
-       NodeDataLoader(
+   train_dataloader = DataLoader(
            graph.cpu(),
            train_idx.cpu(),
            train_sampler,
-           batch_sampler=BatchSampler(len(train_idx), batch_size=train_batch_size),
+           batch_size=train_batch_size,
            num_workers=4,
-       )
    )

lines 171-179:

-   eval_dataloader = DataLoaderWrapper(
-       NodeDataLoader(
+   eval_dataloader = DataLoader(
            graph.cpu(),
            torch.cat([train_idx.cpu(), val_idx.cpu(), test_idx.cpu()]),
            eval_sampler,
-           batch_sampler=BatchSampler(graph.number_of_nodes(), batch_size=32768),
+           batch_size=32768,
            num_workers=4,
-       )
    )
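For context, here is a rough standalone sketch of what the resulting call looks like with newer DGL (>= 0.8). The toy graph, indices, fanouts, batch size, and the MultiLayerNeighborSampler are placeholders of mine, not the actual configuration built in proteins_exp.py:

import torch
import dgl
from dgl.dataloading import DataLoader, MultiLayerNeighborSampler

# Placeholder graph and training indices; in the script these come from
# the ogbn-proteins dataset.
graph = dgl.rand_graph(1000, 5000)
train_idx = torch.arange(800)
train_sampler = MultiLayerNeighborSampler([10, 10, 10])  # stand-in sampler

train_dataloader = DataLoader(
    graph,              # kept on CPU for sampling, as in the script
    train_idx,
    train_sampler,
    batch_size=128,     # train_batch_size in the script
    shuffle=True,       # random batching each epoch (what BatchSampler presumably provided)
    drop_last=False,
    num_workers=0,      # the script uses 4 worker processes
)

# Each iteration yields the same triple the training loop above expects.
for input_nodes, output_nodes, subgraphs in train_dataloader:
    break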

Please update me on how it goes!

Thanks for the prompt reply.

Quite an oversight on my side. I added the batch size, but I was still running into problems with memory usage. I fixed it by doing the following (it is running at the moment, but it seems fine):

-   train_dataloader = DataLoaderWrapper(
-       NodeDataLoader(
+   train_dataloader = NodeDataLoader(
            graph.cpu(),
            train_idx.cpu(),
            train_sampler,
-           batch_sampler=BatchSampler(len(train_idx), batch_size=train_batch_size),
+           batch_size=train_batch_size,
-           num_workers=4,
+           num_workers=0,
-       )
    )

Out of curiosity: what were the specs of the machine you used for the experiments? I am using a V100 with 16GB of RAM, and with 2 workers the memory usage skyrocketed to 14GB in 10 seconds.

We also used a V100 with 16GB of RAM.

I don't know how the changes in the new version of DGL affect memory usage (if any at all).

Please have a look at the arguments in run_proteins.sh and the default arguments in proteins_exp.py which were used for our experiments:

argparser = argparse.ArgumentParser(
    "GAT implementation on ogbn-proteins", formatter_class=argparse.ArgumentDefaultsHelpFormatter
)
argparser.add_argument('--device', type=int, default=0, help="GPU device ID")
argparser.add_argument("--seed", type=int, default=0, help="random seed")
argparser.add_argument("--n-runs", type=int, default=10, help="running times")
argparser.add_argument("--n-epochs", type=int, default=1200, help="number of epochs")
argparser.add_argument("--n-heads", type=int, default=8, help="number of heads")
argparser.add_argument("--lr", type=float, default=0.01, help="learning rate")
argparser.add_argument("--n-layers", type=int, default=6, help="number of layers")
argparser.add_argument("--n-hidden", type=int, default=64, help="number of hidden units")
argparser.add_argument("--dropout", type=float, default=0.25, help="dropout rate")
argparser.add_argument("--input-drop", type=float, default=0.1, help="input drop rate")
argparser.add_argument("--attn-drop", type=float, default=0.0, help="attention dropout rate")
argparser.add_argument("--edge-drop", type=float, default=0.1, help="edge drop rate")
argparser.add_argument("--wd", type=float, default=0, help="weight decay")
argparser.add_argument("--eval-every", type=int, default=5, help="evaluate every EVAL_EVERY epochs")
argparser.add_argument("--log-every", type=int, default=5, help="log every LOG_EVERY epochs")
argparser.add_argument("--save-pred", action="store_true", help="save final predictions")
argparser.add_argument("--type", type=str, default="DPGAT", help="GAT type")
argparser.add_argument("--patient", type=int, default=10, help="early stopping")
argparser.add_argument("--min_epoch", type=int, default=120, help="run at least MIN_EPOCHs")
argparser.add_argument("--max_loss", type=float, default=0.3, help="run at least MIN_EPOCHs")
args = argparser.parse_args()

Maybe you are using other hyperparameters that affect memory usage.

That must be it: library version differences. I just cloned the repo and ran bash run_proteins.sh as-is. Funnily enough, with num_workers=0 I am using only 3GB of memory.

Anyway, it seems that I can now run your experiments, with the only difference being that I am not using the BatchSampler; I guess it is enough to set shuffle=True in NodeDataLoader to obtain the same effect (see the sketch below).
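For completeness, a quick toy-graph check (nothing from the repo) that shuffle=True does reshuffle the seed nodes every epoch, which is the effect I am assuming the BatchSampler provided:

import torch
import dgl
from dgl.dataloading import DataLoader, MultiLayerNeighborSampler

# Toy graph and a one-layer stand-in sampler, just to inspect batching order.
g = dgl.rand_graph(100, 500)
idx = torch.arange(100)
loader = DataLoader(g, idx, MultiLayerNeighborSampler([5]),
                    batch_size=25, shuffle=True, num_workers=0)

for epoch in range(2):
    order = torch.cat([output_nodes for _, output_nodes, _ in loader])
    print(order[:10])  # a different permutation of the node IDs each epoch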

I will close the issue and open a new one if I run into more problems.

Thanks for the help!