sunshineatnoon/PytorchWCT

Training the network on my own

rajatkb opened this issue · 1 comment

Just wanted to clarify: the network is an auto-encoder that uses the feature representations of the VGG network. So if I were to build this model on my own and train it instead of using the pretrained one, all I would have to do is train the auto-encoder with X = training image and Y = the same training image, with no transformation applied in between. Is that it? Because it looks like each block of the VGG has a decoder attached to it. So during training, do I stack all of the (encoder=><=decoder)_1 ==> (encoder=><=decoder)_2 ==> (encoder=><=decoder)_3 pairs without the WCT applied, and then apply the transformation only at inference time?
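
For concreteness, here is a minimal sketch of what I mean by training one (encoder, decoder) pair purely for reconstruction. The layer indices, the decoder layout, and the combined pixel + feature loss are my own assumptions, not code from this repo, and VGG preprocessing is omitted:

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical single level: encoder = VGG-19 up to relu3_1 (frozen),
# decoder = a rough mirror that is trained to invert those features.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
encoder = nn.Sequential(*list(vgg.children())[:12]).eval()  # up to relu3_1
for p in encoder.parameters():
    p.requires_grad = False

decoder = nn.Sequential(
    nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)
mse = nn.MSELoss()

def train_step(img):
    """One reconstruction step: X = img, Y = img, no WCT anywhere."""
    feat = encoder(img)                          # fixed VGG features
    recon = decoder(feat)                        # decoded image
    # pixel reconstruction loss plus a feature (perceptual) loss
    loss = mse(recon, img) + mse(encoder(recon), feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

One such pair would be trained per VGG level (relu1_1 through relu5_1), each on plain image reconstruction.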

Yes, you train each auto-encoder separately. Actually, I didn't train the auto-encoders myself; I directly converted the pre-trained Torch ones to PyTorch.
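
For reference, here is a minimal sketch of how the separately trained pairs are chained at inference, with the WCT applied between each encoder and its decoder. The names (`encoders`, `decoders`, `wct`, `alpha`) are placeholders, not the exact code in this repo:

```python
import torch

@torch.no_grad()
def stylize(content, style, encoders, decoders, wct, alpha=0.6):
    """Coarse-to-fine stylization: content/style are 1x3xHxW tensors,
    encoders/decoders are the independently trained pairs ordered from
    the deepest VGG level to the shallowest, and wct(cf, sf, alpha) is
    the whitening-and-coloring transform."""
    img = content
    for enc, dec in zip(encoders, decoders):  # e.g. relu5_1 -> ... -> relu1_1
        cf = enc(img)              # content features at this level
        sf = enc(style)            # style features at this level
        csf = wct(cf, sf, alpha)   # WCT happens only here, never during training
        img = dec(csf)             # decode the transformed features back to an image
    return img
```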