Bugs of subgraph augmentation
hyp1231 opened this issue · 4 comments
Hi,
While browsing the code, I found what might be a bug in the subgraph augmentation.
In Python, the `union` operation on sets is not an in-place operation, so `idx_neigh`
will not be updated properly. In this situation, only a 1-hop subgraph of a random center node will be generated.
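To illustrate the non-in-place behavior, here is a minimal, self-contained snippet (the variable name `idx_neigh` just mirrors the one in `aug.py`):

```python
idx_neigh = {1, 2}
idx_neigh.union({3, 4})              # returns a NEW set; the result is discarded
print(idx_neigh)                     # {1, 2} -- unchanged, so the frontier never grows
idx_neigh = idx_neigh.union({3, 4})  # the fix: assign the result back
print(idx_neigh)                     # {1, 2, 3, 4}
```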
Some examples:
GraphCL/unsupervised_TU/aug.py
Line 371 in e9e598d
@hyp1231 Thank you for your carefulness! You are right that, due to my typo, the current version only contrasts with small one-hop subgraphs (similar to InfoGraph's global-local contrasting). In a later version of my internal code (to be released in the future), I fixed this bug (simply let `idx_neigh = idx_neigh.union...`) and found that it did not have much influence on performance. Thus, for reproduction purposes it is suggested to stick with the current version, but for development it is better to fix it.
A sample of the improved subgraph function (with the missing `numpy` import and indentation restored):

```python
import numpy as np
import torch_geometric.utils as tg_utils

def subgraph(data, aug_ratio):
    G = tg_utils.to_networkx(data)
    node_num, _ = data.x.size()
    _, edge_num = data.edge_index.size()
    sub_num = int(node_num * (1 - aug_ratio))

    # Start from a random center node; the frontier holds its neighbors
    idx_sub = [np.random.randint(node_num, size=1)[0]]
    idx_neigh = set([n for n in G.neighbors(idx_sub[-1])])

    while len(idx_sub) <= sub_num:
        if len(idx_neigh) == 0:
            # Frontier exhausted (e.g. disconnected component): restart from a random unvisited node
            idx_unsub = list(set([n for n in range(node_num)]).difference(set(idx_sub)))
            idx_neigh = set([np.random.choice(idx_unsub)])
        sample_node = np.random.choice(list(idx_neigh))
        idx_sub.append(sample_node)
        # Fix: assign the union back so the frontier grows beyond one hop
        idx_neigh = idx_neigh.union(set([n for n in G.neighbors(idx_sub[-1])])).difference(set(idx_sub))

    idx_nondrop = idx_sub
    idx_nondrop.sort()
    edge_index, _ = tg_utils.subgraph(idx_nondrop, data.edge_index,
                                      relabel_nodes=True, num_nodes=node_num)
    data.x = data.x[idx_nondrop]
    data.edge_index = edge_index
    data.__num_nodes__, _ = data.x.shape
    return data
```
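The effect of the typo can be reproduced without `torch_geometric` at all. Below is a pure-Python toy sketch (the path graph `adj` and the helper `sample_subgraph` are hypothetical, made up for illustration) contrasting the buggy frontier update with the fixed one:

```python
import random

# Toy path graph 0-1-2-3-4 as an adjacency dict (stands in for G.neighbors)
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

def sample_subgraph(fix_bug, sub_num=3, seed=0):
    rng = random.Random(seed)
    idx_sub = [0]                    # fixed center node for the demo
    idx_neigh = set(adj[idx_sub[-1]])
    while len(idx_sub) <= sub_num:
        if not idx_neigh:
            break                    # frontier exhausted
        node = rng.choice(sorted(idx_neigh))
        if node not in idx_sub:
            idx_sub.append(node)
        if fix_bug:
            # fixed version: assign the union back, so the frontier grows
            idx_neigh = idx_neigh.union(adj[node]).difference(idx_sub)
        else:
            # buggy version: the result of union(...) is discarded
            idx_neigh.union(adj[node])
            idx_neigh = idx_neigh.difference(idx_sub)
    return sorted(idx_sub)

print(sample_subgraph(fix_bug=False))  # [0, 1] -- stuck at the 1-hop neighborhood
print(sample_subgraph(fix_bug=True))   # [0, 1, 2, 3] -- a genuine multi-hop subgraph
```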
@yyou1996 Thanks for your kind and quick response!
BTW, another concern is whether controlling the scale of the subgraph has a slight influence on performance. In the original version of the released code, we have `sub_num = int(node_num * aug_ratio)`
(which is more similar to the original one-hop InfoGraph), while in the sample code just pasted, we have `sub_num = int(node_num * (1-aug_ratio))`
instead.
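The two formulas keep complementary fractions of the graph. With illustrative values (`node_num=100`, `aug_ratio=0.2`, chosen here for the example, not taken from the paper):

```python
node_num, aug_ratio = 100, 0.2

# Released code: the sampled subgraph keeps aug_ratio of the nodes
sub_num_released = int(node_num * aug_ratio)      # 20 nodes kept

# Sample code above: the subgraph keeps (1 - aug_ratio) of the nodes
sub_num_sample = int(node_num * (1 - aug_ratio))  # 80 nodes kept

print(sub_num_released, sub_num_sample)  # 20 80
```

So for the same `aug_ratio`, the released code produces much smaller subgraphs than the sample above.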
GraphCL/transferLearning_MoleculeNet_PPI/bio/loader.py
Lines 288 to 292 in 7eefcc3
According to my limited observation, I don't feel it affects performance much, compared with the augmentation type. Of course, we did not do an explicit ablation on augmentation strength. I feel the type of augmentation represents more of the prior, rather than the strength. Also, I am happy to hear other opinions.
Thanks, it's clear and makes sense. Feel free to close this issue. Thank you again.