Accessing Embedded Space & Decoder Inquiry
jessxphil opened this issue · 2 comments
Thank you in advance for your help!
I don't want to use up too much of your time so I'll try to write a few concise questions.
- I ran the 'Train' code, but the preloaded model didn't update with the new weights/biases from my personal data. How can I make sure the model is actually training on my data?
- I'm looking to extract the embedding space to use as input features for another model. Just to confirm, is that embedding stored in the `pred` variable in `Train.py`? [Note: `pred = model(emb_as, emb_bs)` (line 131) or `pred = pred.argmax(dim=-1)` (line 148)] Also, there's another embedding variable in `Analyze_Embedding.py`. [Note: `embs = model.emb_model(batch).detach().cpu().numpy()` (line 79)]
- Can I use the 'Decoder' on the extracted embeddings to recover my graph data?
Thanks for your interest!
- See the `experimental` branch (#16 (comment)) and the suggested workaround in that thread for a way to train on custom datasets while making use of node features.
- To get a graph's embedding, use `embs = model.emb_model(batch).detach().cpu().numpy()`.
- The subgraph mining component only finds frequent graphs contained in a given region of the embedding space; in general, it's quite tricky to decode an arbitrary point in the embedding space back to a graph, since most points in the space may not correspond exactly to any graph. If possible, it's usually best to keep each graph together with its embedding.
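The embedding-extraction pattern above can be sketched end to end. This is a minimal, hypothetical stand-in: `TinyModel` and `TinyEmbModel` are invented here for illustration (the real repo's encoder consumes batched graph objects, not plain tensors), but the `model.emb_model(batch).detach().cpu().numpy()` call is the same, and the result is a plain NumPy array you can feed to a downstream model:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the repo's trained model: any nn.Module whose
# `emb_model` attribute maps a batch to embedding vectors fits this pattern.
class TinyEmbModel(nn.Module):
    def __init__(self, in_dim=8, emb_dim=4):
        super().__init__()
        self.lin = nn.Linear(in_dim, emb_dim)

    def forward(self, batch):
        return self.lin(batch)

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb_model = TinyEmbModel()

model = TinyModel()
model.eval()  # inference mode: no dropout/batch-norm updates

# Stand-in for a batch of 32 graphs' input features
batch = torch.randn(32, 8)

with torch.no_grad():
    # detach() drops the autograd graph, cpu() moves off any GPU,
    # numpy() converts to a plain array for downstream models
    embs = model.emb_model(batch).detach().cpu().numpy()

print(embs.shape)  # one embedding row per graph in the batch
```

Because the embeddings are detached NumPy arrays, they can be used directly as features for scikit-learn or any other downstream library, while (per the note above) you keep the original graphs alongside them rather than trying to decode embeddings back into graphs.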
Thank you for the clear explanation! I really appreciate your help!