Regarding reference for performance of GCN "(pixels baseline) [19], low res" provided in table 2 for this work.
aryan-at-ul opened this issue · 2 comments
Hi,
I wanted to reach out regarding the accuracy of the GCN (pixels baseline) [19] in Table 2. I am working on something very similar and needed previous work to compare against, but my performance is very poor, no more than 34%. When I checked the given reference, it has no mention of a GCN being applied to CIFAR-10 classification. Was that the result you obtained when converting the data to this format:
data = [torch.from_numpy(np.concatenate((coord, avg_values), axis=1)).unsqueeze(0).float(), torch.from_numpy(A_spatial).unsqueeze(0).float(), False]
where the number of superpixels equals the total number of pixels present?
Any insight you can give me would be very helpful. I am looking for a reference on applying GCNs to CIFAR-10 classification where the nodes of the graph represent pixels in the image.
Regards,
Aryan
Hi, do you get 34% without coordinate features? The GCN [19] pixels baseline in my Table 2, which achieves 50.57%, uses both pixel values and coordinates, so each node has 5 features (R, G, B, x, y). See the paragraph "Graph formation for images" describing that. Also, in this baseline the images are downsampled to 12x12, so each graph is a regular 2D grid with 144 nodes.
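For anyone else hitting this, a minimal sketch of the baseline setup described above (12x12 grid, 5 features per node). This is a hypothetical helper I wrote for illustration, not the author's code: the downsampling here is plain nearest-pixel subsampling, coordinates are normalized to [0, 1], and the adjacency is a 4-neighbour grid; the exact choices in the paper may differ.

```python
import numpy as np

def image_to_grid_graph(img, size=12):
    """Sketch: turn an (H, W, 3) uint8 image into a regular grid graph.

    Returns node features of shape (size*size, 5) holding (R, G, B, x, y)
    per node, and a binary (size*size, size*size) adjacency matrix.
    """
    H, W, _ = img.shape
    # Nearest-pixel subsampling down to size x size (a simple stand-in
    # for whatever downsampling the paper actually uses)
    ys = np.arange(size) * H // size
    xs = np.arange(size) * W // size
    small = img[ys][:, xs].astype(np.float32) / 255.0  # (size, size, 3)

    # Normalized (x, y) coordinates in [0, 1]
    yy, xx = np.meshgrid(np.arange(size), np.arange(size), indexing='ij')
    coord = np.stack([xx, yy], axis=-1).astype(np.float32) / (size - 1)

    # 5 features per node: R, G, B, x, y
    feats = np.concatenate([small, coord], axis=-1).reshape(size * size, 5)

    # 4-neighbour grid adjacency (symmetric, no self-loops)
    A = np.zeros((size * size, size * size), dtype=np.float32)
    for i in range(size):
        for j in range(size):
            n = i * size + j
            if i + 1 < size:
                A[n, n + size] = A[n + size, n] = 1.0
            if j + 1 < size:
                A[n, n + 1] = A[n + 1, n] = 1.0
    return feats, A

feats, A = image_to_grid_graph(np.zeros((32, 32, 3), dtype=np.uint8))
print(feats.shape, A.shape)  # (144, 5) (144, 144)
```

The resulting `feats` and `A` can then be packed into tensors the way the snippet earlier in this thread does, with `coord` and the averaged pixel values concatenated per node.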
The paper "Benchmarking Graph Neural Networks" also has more experiments (with strong results on superpixels), and their GitHub implementation is available.
Thanks for the prompt response. I will add 2D grid (i, j) coordinates and then try; currently I experimented with adding (x, y) as ( i/w - 0.5, i/h - 0.5 ), based on the positional augmentation in some previous work. I will also try downsampling the image for the baseline.
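For reference, the two coordinate encodings discussed here side by side, as a small sketch (variable names are mine; either is a reasonable positional feature, the centered variant matching the augmentation mentioned above):

```python
import numpy as np

h = w = 12  # grid size used by the downsampled baseline
yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')

# Coordinates scaled to [0, 1]
coord_01 = np.stack([xx / (w - 1), yy / (h - 1)], axis=-1)

# Centered coordinates in [-0.5, 0.5), as in (i/w - 0.5, j/h - 0.5)
coord_centered = np.stack([xx / w - 0.5, yy / h - 0.5], axis=-1)

print(coord_01.min(), coord_01.max())  # 0.0 1.0
print(coord_centered.min(), coord_centered.max())
```

Either array can be concatenated with the per-node pixel values to give the 5-feature nodes described earlier in the thread.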
Thanks again.