Converting between DeepRobust and PyTorch Geometric
akul-goyal opened this issue · 9 comments
Hi,
I am trying to attack my own custom model using the topology attack listed under graph/global_attack. The attack runs fine up to the point where I have a modified adjacency matrix that I want to pass into my PyTorch Geometric model. Since it is in adjacency-matrix format rather than the edge_index format that PyTorch Geometric uses, how can I convert my adjacency matrix to a format PyG can use without losing the gradients that are needed later for backprop? I tried adj_matrix.nonzero(), but that gets rid of the gradients.
More simply put, can you attack GAT in DeepRobust using the topology attack?
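To illustrate the failure mode with a toy tensor (adj_matrix below is a made-up 4x4 example, not the actual attack matrix): nonzero() returns an integer index tensor, which cannot carry a grad_fn, so the gradient path is cut.

import torch

adj_matrix = torch.rand(4, 4, requires_grad=True)  # toy dense adjacency
edge_index = adj_matrix.nonzero().t()  # LongTensor of indices: integer tensors have no grad_fn
print(edge_index.requires_grad)        # False -- the gradient path is cut here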
Hi,
The current code does not support attacking GAT using the topology attack. You may refer to the PRBCD attack to implement the attack with PyG.
However, we will soon include PRBCD in DeepRobust based on PyG (maybe in a month). Please stay tuned :)
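If your PyG install is recent enough, it may already ship a PRBCD implementation in torch_geometric.contrib; a rough usage sketch follows (the exact class location and signature are an assumption here, so check your version's docs; victim_model, x, edge_index, labels, and idx_test come from your own setup):

from torch_geometric.contrib.nn import PRBCDAttack

# the victim model should accept (x, edge_index, edge_weight) so that the
# relaxed edge weights stay differentiable during the attack
attack = PRBCDAttack(victim_model, block_size=250_000)
pert_edge_index, flipped_edges = attack.attack(
    x, edge_index, labels, budget=100, idx_attack=idx_test)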
Hi @ChandlerBang,
Thanks for the quick response. Could you provide some intuition on how PRBCD attacks a PyG model? In the current format of DeepRobust, a dense matrix gets modified. When I convert from DeepRobust to PyG, the gradients of the dense matrix are lost. How can you preserve them? Furthermore, I am using a NeighborSampler for PyG, so is there a way to pass the dense matrix to the neighbor sampler without losing the gradients?
Hey, you may take a look at their paper first. Basically, at each step they will sample a block of edge indices (a small portion of all the possible edge indices) and optimize the edge weights by gradient descent.
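In code, one step of that idea looks roughly like this (a sketch of the mechanism only, with made-up sizes and names, not the authors' implementation):

import torch
import torch.nn.functional as F

n_nodes, block_size = 2708, 10_000  # e.g. a Cora-sized graph; illustrative block size

# sample a block of candidate edge indices (a small portion of all n^2 pairs)
row = torch.randint(n_nodes, (block_size,))
col = torch.randint(n_nodes, (block_size,))
block_edge_index = torch.stack([row, col])

# relaxed weights for the sampled block, optimized by gradient descent;
# after the final step the largest weights are discretized into edge flips
block_weight = torch.full((block_size,), 1e-3, requires_grad=True)
optimizer = torch.optim.Adam([block_weight], lr=0.1)

# inside the attack loop the block is appended to the clean graph
# (model, x, edge_index, labels, idx_target are assumed to exist):
# full_index = torch.cat([edge_index, block_edge_index], dim=1)
# full_weight = torch.cat([torch.ones(edge_index.size(1)), block_weight])
# loss = -F.cross_entropy(model(x, full_index, full_weight)[idx_target], labels[idx_target])
# optimizer.zero_grad(); loss.backward(); optimizer.step()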
@akul-goyal Hi
I am running into the same question. I'm trying to implement the topology attack on GAT. First I used PyG with the same approach as you; then, to avoid the adjacency-matrix problem, I used plain PyTorch, but both approaches lose the gradients. Did you manage to solve it?
Yes, I used the following for help! pyg-team/pytorch_geometric#1511
Thanks a lot
@akul-goyal Hi
I'm sorry to bother you again. Following the link you provided, I made the following modifications:
edge_index = adj.nonzero().t()   # indices of nonzero entries, shape [2, num_edges]
row, col = edge_index
edge_weight = adj[row, col]      # differentiable indexing keeps the gradient path to adj
and this solves the vanishing-gradient problem. I then feed modified_adj into GAT, but the test accuracy on modified_adj shows no apparent decline. I wonder whether your topology attack was effective. Could you show me some of your modified code, if available? Thank you.
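As a sanity check, gradient flow through this conversion can be verified on a toy tensor (illustrative shapes only):

import torch

adj = torch.rand(4, 4, requires_grad=True)  # toy dense adjacency
edge_index = adj.nonzero().t()
row, col = edge_index
edge_weight = adj[row, col]     # gathered values still have a grad_fn

edge_weight.sum().backward()
print(adj.grad is not None)     # True -- gradients reach the dense adjacency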
Hey, I am not sure I understand you correctly, but based on your code, nonzero() does not preserve gradients. So that may be the problem.
The aim of nonzero() is to convert the dense adjacency matrix to the edge_index format that PyG expects. So if I use PyG, what should I do?
Here is the code. The victim model is GATConv, as provided by PyG.
for t in tqdm(range(epochs)):
    # update victim model
    victim_model.train()
    modified_adj = self.get_modified_adj(ori_adj)
    adj_norm = utils.normalize_adj_tensor(modified_adj)
    # convert the dense adjacency to PyG's (edge_index, edge_weight) format;
    # the indexing adj_norm[row, col] keeps the gradient path to adj_norm
    edge_index = (adj_norm > 0).nonzero().t()
    row, col = edge_index
    edge_weight = adj_norm[row, col]
    output = victim_model(ori_features, edge_index, edge_weight)
    loss = self._loss(output[idx_train], labels[idx_train])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # generate pgd attack
    victim_model.eval()
    modified_adj = self.get_modified_adj(ori_adj)
    adj_norm = utils.normalize_adj_tensor(modified_adj)
    edge_index = (adj_norm > 0).nonzero().t()
    row, col = edge_index
    edge_weight = adj_norm[row, col]
    output = victim_model(ori_features, edge_index, edge_weight)
    loss = self._loss(output[idx_train], labels[idx_train])
    adj_grad = torch.autograd.grad(loss, self.adj_changes)[0]
Could you reply to me in your free time if you know what the problem is? Thank you.