82magnolia/n_imagenet

Training a GNN on N-ImageNet

Closed this issue · 1 comment

Hello,
Thank you for making the dataset public.
I am working on object recognition using your dataset (mini N-ImageNet). Unlike the CNN architectures used in your paper, I am exploring a graph neural network architecture, AEGNN (Asynchronous Event-based Graph Neural Networks). In my experiments I get very good train and validation performance but very low test performance (over 95% train accuracy versus about 20% test accuracy).
I was wondering if you have tried GNN architectures on this dataset? Given your extensive experience with the dataset, could you share some insight into what might lead to good train performance but poor test performance with GNN models?
Thanks again for your time. We will cite your work in our research paper.
Regards.

Hi, I haven't tried GNNs on the dataset, but I have faced similar issues when training standard CNNs on N-ImageNet (a large train vs. validation gap). In my case the problem was that the receptive field of the standard CNN's first layer was too small. I was able to attain reasonable performance once I increased the first layer's kernel size to 14 (you can see this in our configs/ folder). Perhaps you can try a similar trick and enlarge the receptive field of the GNN.
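To make the suggestion concrete, here is a small sketch of how the first-layer kernel size affects the network's receptive field. The layer specs below (kernel sizes and strides) are illustrative placeholders, not the actual values from the configs/ folder; the point is only that a larger first kernel lets early features aggregate a wider spatial context, which matters for sparse event data.

```python
def receptive_field(layers):
    """Compute the receptive field (in input pixels) of a stack of
    conv layers, each given as a (kernel_size, stride) pair."""
    rf, jump = 1, 1  # rf: receptive field so far; jump: input-pixel step per output pixel
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Hypothetical two-layer stems: a typical small 3x3 first kernel vs.
# an enlarged 14x14 first kernel (strides are made-up examples).
small_stem = [(3, 2), (3, 1)]
large_stem = [(14, 2), (3, 1)]

print(receptive_field(small_stem))  # -> 7
print(receptive_field(large_stem))  # -> 18
```

The enlarged first kernel more than doubles the receptive field in this toy setup; for an event-based GNN, the analogous knob would be the graph connectivity radius or the number of message-passing hops.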