Implementation of the fusion operation in MGCN
Opened this issue · 9 comments
Thanks for your great work!
I couldn't find the exact code that implements the fusion operation in MGCN (Eq. 10 in your CVPR paper). I think it is the key to using the relationship information correctly.
Could you please clarify? Thanks!
This part does not improve the performance much, so I removed it in the updated version for faster training.
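For anyone who wants to restore this part, a common way to fuse a relationship triplet (subject, relation, object features) into a single relationship embedding is a small MLP over their concatenation. This is only a minimal sketch of that pattern, not the paper's exact Eq. 10; the function name, dimensions, and the ReLU choice are assumptions.

```python
import numpy as np

def fuse_triplet(v_subj, v_rel, v_obj, W, b):
    """Hypothetical fusion step: concatenate the three d-dim features
    and project back to d dims with a ReLU (a sketch, not Eq. 10 itself)."""
    x = np.concatenate([v_subj, v_rel, v_obj])  # shape (3*d,)
    return np.maximum(W @ x + b, 0.0)           # shape (d,)

rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((d, 3 * d)) * 0.1  # fusion weights (assumed shape)
b = np.zeros(d)
u_rel = fuse_triplet(rng.standard_normal(d),
                     rng.standard_normal(d),
                     rng.standard_normal(d), W, b)
print(u_rel.shape)  # (8,)
```

In the paper's GCN the fused edge embedding would then be aggregated into the node embeddings, but that aggregation is outside this sketch.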
I am preparing to reproduce the SGAE project, but due to my computer configuration, the original author's tsv dataset cannot be fully processed to generate cocobu_att, cocobu_box, and cocobu_fc. I saw your question under the author's project and hope I can ask for your help. Could you send the three folders you generated to my mailbox, 997932544@qq.com? Thank you very much and good luck.
@yangxuntu What do you mean by deleting this part? When training, use_rela is set to 0, so how does MGCN work? Also, in your paper, U_rij and U_ai are obtained in a way similar to Eq. (10); how do you get V_rij and V_ai? Are the ROI features of relationships available in the pre-processed files? Thank you.
@yangxuntu Thank you, but I still do not quite understand. According to https://shiyaya.github.io/2019/03/16/SAGE-Auto-Encoding-Scene-Graphs-for-Image-Captioning/ , in the image encoder the input includes v_r, the relation ROI feature, which is also 2048-dimensional. As I understand it, the object ROI feature is obtained through bottom-up attention. What about the relation ROI feature? Is it obtained by combining the boxes of the subject and object involved in a relationship, and then using bottom-up attention on that combined box? There seems to be no relation ROI feature in the preprocessed files; should I check again?
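If the "combine the boxes" guess above is right, the combined region would simply be the union bounding box of the subject and object boxes, from which a feature could then be pooled. A tiny sketch of that union-box step (the (x1, y1, x2, y2) layout is an assumption; whether SGAE actually does this is exactly what this question asks):

```python
def union_box(box_a, box_b):
    """Smallest axis-aligned box covering both input boxes.
    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2 (assumed layout)."""
    return (min(box_a[0], box_b[0]),  # left-most x1
            min(box_a[1], box_b[1]),  # top-most y1
            max(box_a[2], box_b[2]),  # right-most x2
            max(box_a[3], box_b[3]))  # bottom-most y2

# e.g. subject box and object box of one relationship
print(union_box((0, 0, 2, 2), (1, 1, 3, 3)))  # (0, 0, 3, 3)
```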
@yangxuntu OK, thanks.
In this code I do not provide this part: my model spans more than ten different files, and this part of the code is contained in another file. Because I am not good at organizing all the code into a polished project, I only provide one file that contains the most important part of the whole framework. V_r is a feature extracted from MOTIFS, which is different from V_o; that is why I do not provide this part of the code, since providing it would require uploading a new file for the feature extractor and a new dataloader file. I was a naive coder at that time.
…
"In the image encoder, the relation ROI feature, which is also 2048-dimensional. What about the relation ROI feature?" Could you please provide the pre-trained 2048-dimensional relation ROI feature files? How much does adding the relation ROI feature improve performance? And how can I extract the V_r ROI feature from MOTIFS easily? That would help me a lot, thank you!
Could you please provide the pre-trained 2048-dimensional relation ROI feature files?