Reproducing results on proteins
adrianjav opened this issue · 4 comments
Hi!
I was trying to reproduce the results on the proteins dataset on a V100 GPU, and I am running into a few problems. First, I had some issues with the BatchSampler (samplers cannot be passed to iterable datasets), so I just removed it (I am using the latest version of DGL, since 0.6 is not available for my CUDA version).
After fixing that, I came across this error message
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 64])
in line 98 of models.py
how_attentive_are_gats/proteins/models.py
Lines 90 to 100 in b5ccb61
Is this error related to removing the BatchSampler? And what tensor size is expected at that line?
Thanks for the help
Hi @adrianjav!
The error you describe happens when the BatchNorm layer is fed a batch of size 1.
In this case, that means there is only one subgraph in the batch:
how_attentive_are_gats/proteins/proteins_exp.py
Lines 90 to 96 in b5ccb61
Since you removed the batch sampler, you probably broke this logic.
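For reference, this failure can be reproduced in isolation: `torch.nn.BatchNorm1d` cannot compute batch statistics from a single sample in training mode. A minimal sketch, independent of the repository code:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(64)
bn.train()  # the check only applies in training mode

try:
    # A batch holding a single subgraph -> input of shape (1, 64), as in the traceback.
    bn(torch.randn(1, 64))
except ValueError as err:
    print(err)  # Expected more than 1 value per channel when training, got input size torch.Size([1, 64])

# With two or more samples per batch the same layer works fine.
out = bn(torch.randn(2, 64))
```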
I guess the solution would be to replace the following in proteins_exp.py:
```diff
- from dgl.dataloading.pytorch import NodeDataLoader
+ from dgl.dataloading import DataLoader
```
lines 159-157:
```diff
- train_dataloader = DataLoaderWrapper(
-     NodeDataLoader(
+ train_dataloader = DataLoader(
      graph.cpu(),
      train_idx.cpu(),
      train_sampler,
-     batch_sampler=BatchSampler(len(train_idx), batch_size=train_batch_size),
+     batch_size=train_batch_size,
      num_workers=4,
- )
  )
```
lines 171-179:
```diff
- eval_dataloader = DataLoaderWrapper(
-     NodeDataLoader(
+ eval_dataloader = DataLoader(
      graph.cpu(),
      torch.cat([train_idx.cpu(), val_idx.cpu(), test_idx.cpu()]),
      eval_sampler,
-     batch_sampler=BatchSampler(graph.number_of_nodes(), batch_size=32768),
+     batch_size=32768,
      num_workers=4,
- )
  )
```
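Put together, the replacement would look roughly like this. This is only a sketch against the `dgl.dataloading.DataLoader` API of recent DGL releases (it reuses `graph`, the index tensors, the samplers, and `train_batch_size` from proteins_exp.py), so the arguments may need adjusting to your exact DGL version:

```python
import torch
from dgl.dataloading import DataLoader

# Mini-batches of training node IDs; batch_size replaces the custom BatchSampler.
train_dataloader = DataLoader(
    graph.cpu(),
    train_idx.cpu(),
    train_sampler,
    batch_size=train_batch_size,
    num_workers=4,
)

# Evaluation loader over all train/val/test nodes, in large fixed-size batches.
eval_dataloader = DataLoader(
    graph.cpu(),
    torch.cat([train_idx.cpu(), val_idx.cpu(), test_idx.cpu()]),
    eval_sampler,
    batch_size=32768,
    num_workers=4,
)
```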
Please update me on how it goes!
Thanks for the prompt reply.
Quite an oversight on my side. I added the batch size, but I was still running into problems with memory usage. I fixed it by doing the following (it is running at the moment, but it seems OK):
```diff
- train_dataloader = DataLoaderWrapper(
-     NodeDataLoader(
+ train_dataloader = NodeDataLoader(
      graph.cpu(),
      train_idx.cpu(),
      train_sampler,
-     batch_sampler=BatchSampler(len(train_idx), batch_size=train_batch_size),
+     batch_size=train_batch_size,
-     num_workers=4,
+     num_workers=0,
- )
  )
```
Out of curiosity: what were the specs of the machine you used for the experiments? I am using a V100 with 16 GB of RAM, and with 2 workers the memory usage skyrocketed to 14 GB within 10 seconds.
We also used a V100 with 16 GB of RAM.
I don't know how the changes in the new version of DGL affect memory usage (if at all).
Please have a look at the arguments in run_proteins.sh and the default arguments in proteins_exp.py that we used for our experiments:
how_attentive_are_gats/proteins/proteins_exp.py
Lines 260 to 283 in b5ccb61
Maybe you use other hyperparameters that affect memory usage.
That has to be it, then: library differences. I just cloned the repo and ran bash run_proteins.sh as it is. Funnily enough, with num_workers=0 I am using only 3 GB of memory.
Anyway, it seems that I can now run your experiments, with the only difference that I am not using BatchSampler, but I guess setting shuffle=True in NodeDataLoader is enough to obtain the same effect.
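For completeness, a sketch of that workaround; whether shuffle=True exactly reproduces what BatchSampler did is only my guess above, not something I verified against the original implementation:

```python
from dgl.dataloading import NodeDataLoader  # exact import path depends on the DGL version

# Plain NodeDataLoader: let it shuffle the training IDs itself instead of using the
# custom BatchSampler, and use no worker processes to keep memory usage low.
train_dataloader = NodeDataLoader(
    graph.cpu(),
    train_idx.cpu(),
    train_sampler,
    batch_size=train_batch_size,
    shuffle=True,   # assumed stand-in for the shuffling done by BatchSampler
    num_workers=0,  # with 2 workers, memory usage climbed to ~14 GB on my machine
)
```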
I will close the issue and open a new one if I run into more problems.
Thanks for the help!