A question about training
junjie1003 opened this issue · 4 comments
junjie1003 commented
Could you please tell me the difference between the first stage with “config.pre_train=True” and the later stage with “config.pre_train=False”? Thank you!
MehmetAygun commented
The True one doesn't use the instance loss during training, while the False one uses all of the losses. In my experiments, training the backbone first and then finetuning improved the results.
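A minimal sketch of what this could look like, assuming the flag simply gates the instance loss term; the function and argument names below are placeholders, not the repository's actual API:

```python
import torch.nn.functional as F

def total_loss(pred, target, inst_pred, inst_target, pre_train: bool):
    # loss term used in both stages (a reconstruction-style loss as a stand-in)
    loss = F.mse_loss(pred, target)
    if not pre_train:
        # the instance loss is only added in the second (finetuning) stage
        loss = loss + F.cross_entropy(inst_pred, inst_target)
    return loss
```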
junjie1003 commented
Thank you for your response. I still have a question: do you finetune the whole network, or only the four decoders after the encoder-decoder architecture?
MehmetAygun commented
I finetuned all of them; the False option does that.
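A rough sketch of the two-stage schedule this implies, assuming a backbone followed by four decoders; the modules, learning rates, and variable names here are illustrative only:

```python
import torch
import torch.nn as nn

# toy stand-ins for the real backbone and the four decoders
backbone = nn.Linear(16, 8)
decoders = nn.ModuleList([nn.Linear(8, 4) for _ in range(4)])
model = nn.ModuleDict({"backbone": backbone, "decoders": decoders})

# Stage 1 (config.pre_train=True): train without the instance loss.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# ... run the training loop with the reduced loss set ...

# Stage 2 (config.pre_train=False): finetune all parameters, nothing frozen,
# with every loss (including the instance loss) enabled.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# ... run the training loop with all losses ...
```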
junjie1003 commented
Thank you very much!