ushasi/Siamese-spatial-Graph-Convolution-Network

Pre-trained weights to extract the Graph features

Closed this issue · 5 comments

Hello,

As I see from your repo, the pattGCN.mat features are extracted with another pre-trained model from GitHub (https://github.com/nagmakhan/multi-label-analysis). However, I could not obtain the weights to run it on other data. Do you have the weights of that model so these pattGCN features can be extracted again?

Thanks.

Right. So for the PatternNet dataset, or any other dataset for that matter, we used an initial model to construct the graph. This is provided in this (https://github.com/ushasi/Image-to-Region-Adjacency-Graph-creation) repository. It basically converts each image in a dataset into a region adjacency graph and stores the weighted edge adjacency and node features in a .mat file.
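For illustration, here is a rough sketch of what that per-image conversion looks like, using scikit-image's SLIC superpixels and RAG utilities. This is my own minimal example with made-up file names and feature choices, not the exact code in that repository:

```python
# Minimal sketch (NOT the repository's exact code): convert one image into a
# region adjacency graph and store node features plus a weighted adjacency
# matrix in a .mat file. Requires scikit-image >= 0.20 (older versions expose
# the RAG helpers under skimage.future.graph instead of skimage.graph).
import numpy as np
from scipy.io import savemat
from skimage import io, segmentation, graph

image = io.imread("example.jpg")                 # any RGB image (hypothetical path)
labels = segmentation.slic(image, n_segments=50, compactness=10, start_label=0)
rag = graph.rag_mean_color(image, labels)        # weighted RAG over superpixels

n_nodes = labels.max() + 1
adjacency = np.zeros((n_nodes, n_nodes))
for u, v, data in rag.edges(data=True):
    adjacency[u, v] = adjacency[v, u] = data["weight"]

# Simple per-region node features: mean colour of each superpixel.
node_feats = np.stack([image[labels == r].mean(axis=0) for r in range(n_nodes)])

savemat("example_rag.mat", {"adjacency": adjacency, "node_features": node_feats})
```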

Once you get these initial-level features, you use the https://github.com/nagmakhan/multi-label-analysis repository to get the initial GCN features from the entire graph structure of the dataset.
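Conceptually, this step propagates the stored node features over the normalised adjacency. The following is only a minimal NumPy sketch of one propagation layer, assuming the .mat keys from the sketch above; the actual multi-label-analysis code trains full layers against labels:

```python
# Minimal sketch of one Kipf-and-Welling-style GCN propagation step:
# H' = ReLU( D^-1/2 (A + I) D^-1/2 H W ). Not the multi-label-analysis code itself.
import numpy as np
from scipy.io import loadmat

data = loadmat("example_rag.mat")                # file from the previous sketch
A = data["adjacency"]
H = data["node_features"].astype(np.float64)

A_hat = A + np.eye(A.shape[0])                   # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt         # symmetric normalisation

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(H.shape[1], 16)) # random weights, illustration only
H_next = np.maximum(A_norm @ H @ W, 0.0)         # ReLU(A_norm H W)

# In the actual pipeline these node embeddings are trained and pooled to give
# the per-image GCN features (e.g. what ends up in pattGCN.mat).
```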

Finally, for the Siamese architecture, you use this repository to get the final robust features.
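If it helps to picture the Siamese stage: it is a shared embedding network applied to pairs of per-image GCN feature vectors, trained so that matching pairs move closer and non-matching pairs move apart. This PyTorch sketch with hypothetical dimensions is only an illustration of that idea, not the code in this repository:

```python
# Illustrative Siamese setup over per-image GCN features with a contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseHead(nn.Module):
    """Shared embedding network applied to both branches."""
    def __init__(self, in_dim=16, out_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, out_dim))

    def forward(self, x1, x2):
        return self.net(x1), self.net(x2)

def contrastive_loss(z1, z2, same_label, margin=1.0):
    """Pull same-class pairs together, push different-class pairs apart."""
    d = F.pairwise_distance(z1, z2)
    return (same_label * d.pow(2) + (1 - same_label) * F.relu(margin - d).pow(2)).mean()

# Hypothetical usage: x1, x2 are batches of GCN feature vectors, y = 1 for matching pairs.
model = SiameseHead()
x1, x2 = torch.randn(4, 16), torch.randn(4, 16)
y = torch.tensor([1.0, 0.0, 1.0, 0.0])
z1, z2 = model(x1, x2)
loss = contrastive_loss(z1, z2, y)
loss.backward()
```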

Thanks,

Hi,
I have also added the single-label GCN code in case you need that for single-labeled data.

Hi,

I got the whole pipeline now. Thank you for the detailed and valuable explanation.
I have the initial features, i.e. the graph structures, as you explained.

I need to extract the GCN features using the repository (https://github.com/nagmakhan/multi-label-analysis).
To extract the GCN features, do we need the pre-trained weights of the GCN model, or did you train the GCN model (in the other repository) from scratch? I could not find the weights of the GCN model in that repository. If you used a pre-trained GCN model, would it be possible for you to share its weights with me?
I just want to compare your valuable work with mine.


Thank you so much

No, we do not have any pre-trained weights for the GCN part. We simply use the image-to-RAG creation step to obtain the node and edge features, and feed this RAG to the GCN code (https://github.com/nagmakhan/multi-label-analysis). However, once that is trained, check the last part of the code, which sits in a separate loop; that part stores the features corresponding to each image. We use these features to initialize the subsequent SGCN framework.
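As a rough picture of that final loop, here is a sketch with hypothetical file names and an `embed_nodes` placeholder standing in for the trained GCN's forward pass; it is not the repository's actual code:

```python
# Minimal sketch: after training, loop over the dataset once more, pool each
# image's node embeddings into a single feature vector, and save them all.
# These vectors play the role of pattGCN.mat in the SGCN initialization.
import numpy as np
from scipy.io import loadmat, savemat

def embed_nodes(adjacency, node_features):
    """Placeholder for the trained GCN forward pass (returns node embeddings)."""
    return node_features  # identity here, just to keep the sketch runnable

feature_list = []
for path in ["img0_rag.mat", "img1_rag.mat"]:      # one RAG .mat file per image
    data = loadmat(path)
    node_emb = embed_nodes(data["adjacency"], data["node_features"])
    feature_list.append(node_emb.mean(axis=0))     # mean-pool nodes -> per-image vector

savemat("gcn_image_features.mat", {"features": np.stack(feature_list)})
```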

There are no other separate pre-trained weights. We started training from scratch.