vision4robotics/TCTrack

How to make the pretrained weights of the backbone?

HasilPark opened this issue · 1 comment

Can you share the source, or explain how to make the pretrained weights of the backbone?

Because we use TAdaConv, our backbone has learnable weights of the same shape as those of the original CNN backbone. Our goal is a backbone whose output matches that of the original CNN backbone before training. Therefore, each layer of our backbone's pretrained weights is identical to the corresponding layer of the original CNN backbone described in our paper. In the code, we 1. copy the original parameters and biases into our backbone, and 2. initialize the calibration vector so that the output of our backbone equals that of the original version before training.

This is an example of our initialization.

import torch as t
# TemporalAlexNet, AlexNet, and load_pretrain come from the TCTrack code base.

ours = TemporalAlexNet()
original = AlexNet()

load_pretrain(original, 'alexnet-bn.pth')  # alexnet-bn.pth is the model pretrained on ImageNet

t.save(original.layer1.state_dict(), '1.pth')  # save the parameters of every layer
t.save(original.layer2.state_dict(), '2.pth')
t.save(original.layer3.state_dict(), '3.pth')
t.save(original.layer4[1].state_dict(), '4.pth')
t.save(original.layer5[1].state_dict(), '5.pth')

ours.block1.load_state_dict(t.load('1.pth'))  # load the parameters into our model
ours.block2.load_state_dict(t.load('2.pth'))
ours.block3.load_state_dict(t.load('3.pth'))
ours.b_f1[0].load_state_dict(t.load('4.pth'))
ours.b_f2.load_state_dict(t.load('5.pth'))

t.save(original.layer4[0].weight, '4-1.pth')  # save the initial weights of the original convolutions
t.save(original.layer4[0].bias, '4-2.pth')
t.save(original.layer5[0].weight, '5-1.pth')
t.save(original.layer5[0].bias, '5-2.pth')

with t.no_grad():  # write into the leaf parameters without tracking gradients
    ours.temporalconv1.weight[0, 0, :, :, :, :] = t.load('4-1.pth')  # load the initial weights
    ours.temporalconv1.bias[0, 0, :] = t.load('4-2.pth')

    ours.temporalconv2.weight[0, 0, :, :, :, :] = t.load('5-1.pth')
    ours.temporalconv2.bias[0, 0, :] = t.load('5-2.pth')

t.save(ours.state_dict(), 'temporalalexnet.pth')
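
To confirm the initialization behaves as intended, a quick sanity check can compare the two backbones on the same input before any training. This is only a sketch under the assumption that both forward passes accept a single image tensor; the actual TemporalAlexNet in TCTrack may take additional temporal arguments, in which case the calls need to be adapted.

# Sanity check (sketch): the outputs should match before training.
# Assumes ours(x) and original(x) each take a single image tensor; adjust
# the calls if TemporalAlexNet requires extra temporal inputs.
ours.eval()
original.eval()
with t.no_grad():
    x = t.randn(1, 3, 127, 127)  # dummy template-sized input
    diff = (ours(x) - original(x)).abs().max()
print('max abs difference before training:', diff.item())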