Dimension (Mis)matching - What's the correct usage of DeepSnap Heterogeneous Graph Convolution?
yrf1 opened this issue · 1 comments
I'm sorry I didn't find a good example code or documentation online that I can follow.
In the example code below, how come the GNN didn't require inputs of dimension 200/350/800?
from deepsnap.hetero_gnn import HeteroConv, HeteroSAGEConv, forward_op, loss_op
from deepsnap.hetero_graph import HeteroGraph
import networkx as nx
import torch
conv1 = {}
conv1[("n1","e0","n0")] = HeteroSAGEConv(800,600,200)
conv1[("n1","e1","n1")] = HeteroSAGEConv(350,600,350)
conv1 = HeteroConv(conv1)
G = nx.DiGraph()
G.add_node("n0", node_type="n0", node_feature=\
torch.zeros((1)).float())
G.add_node("n1", node_type="n1", node_feature=\
torch.zeros((1)).float())
G.add_edge("n0", "n1", edge_type="e1")
G = HeteroGraph(G)
G = conv1(G, G.edge_index)
Hello! I'm just a user of this code base rather than one of its developers, but I might be able to help. Can I ask where your example code comes from?
To create a HeteroSAGEConv layer, the inputs are:
conv1[("src_node","link_label","dst_node")] = HeteroSAGEConv(num_features_src_type, hidden_dimension, num_features_dst_type)
where num_features_src_type is the number of features for the node type of your source node and num_features_dst_type is the number of features on the destination node type.
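As a rough sketch (the feature sizes of 800 for n1 and 200 for n0, and the hidden dimension of 600, are just read off your conv1[("n1","e0","n0")] layer and may not match your real data), I would expect the per-message-type layers to be wired like this:
from deepsnap.hetero_gnn import HeteroConv, HeteroSAGEConv
# Sketch only: feature sizes assumed from the ("n1","e0","n0") layer in the question.
num_features = {"n0": 200, "n1": 800}
hidden_dim = 600
conv1 = {}
# Message type ("n1", "e0", "n0"): source/neighbour type is n1 (800 features),
# destination/self type is n0 (200 features).
conv1[("n1", "e0", "n0")] = HeteroSAGEConv(num_features["n1"], hidden_dim, num_features["n0"])
# Message type ("n1", "e1", "n1"): both endpoints are n1, so both feature arguments are 800.
conv1[("n1", "e1", "n1")] = HeteroSAGEConv(num_features["n1"], hidden_dim, num_features["n1"])
conv1 = HeteroConv(conv1)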
That means that for the layer you define in your example, conv1[("n1","e1","n1")], I would have expected conv1[("n1","e1","n1")] = HeteroSAGEConv(800, 600, 800), if your desired hidden dimension is 600 and the number of features on node type n1 is 800, as the definition of conv1[("n1","e0","n0")] suggests. I might be able to help more if you explain a bit more about what you are trying to achieve in your example?
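Also, for what it's worth, the node_feature tensors on the graph need to match those sizes too; your example gives every node a length-1 feature vector. A minimal sketch of a consistent graph (again assuming n0 has 200 features and n1 has 800, and with edges chosen to match the message types in conv1) would look like:
import torch
import networkx as nx
from deepsnap.hetero_graph import HeteroGraph
# Sketch only: feature sizes are assumptions matching the layer definitions above.
G = nx.DiGraph()
G.add_node("n0", node_type="n0", node_feature=torch.zeros(200).float())  # dim 200 to match the n0 layer argument
G.add_node("n1", node_type="n1", node_feature=torch.zeros(800).float())  # dim 800 to match the n1 layer arguments
G.add_edge("n1", "n0", edge_type="e0")  # produces message type ("n1", "e0", "n0")
G.add_edge("n1", "n1", edge_type="e1")  # self-loop, produces message type ("n1", "e1", "n1")
hetero_G = HeteroGraph(G)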