Why does the model in the master branch give me better results than the model2 branch?
@abskj thank you so much for sharing this code.
I tried the code in both branches, but the autoencoder model in the master branch, which has fewer hidden layers, gives me better image quality when decoding than model2. Why does this happen, and what is the reason for it? Is it normal?
Should I train the model only on PNG images, or can I use both PNG and JPG?
I think that has to do with training. A model with more parameters generally has to be trained longer, and it is also more prone to overfitting. When we did the project we did not have access to good GPUs, so we trained on Google Colab and also on Kaggle, which gave us a powerful GPU but stopped the process after a limited time. I think both models have the potential to be more efficient. Adequately trained, I would bet the more heavily parameterized model would give better performance.
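In case it helps, here is a rough sketch of what "adequately trained" could look like, assuming a Keras-style setup (the `model`, `x_train`, and `x_val` names are placeholders, not the repo's actual code): give the larger model more epochs, but stop when the validation reconstruction loss stops improving so it does not overfit.

```python
# Rough sketch, assuming a Keras-style training loop; adapt to the repo's own objects.
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor="val_loss",       # watch reconstruction loss on held-out images
    patience=10,              # allow 10 epochs without improvement before stopping
    restore_best_weights=True,
)

# `model`, `x_train`, `x_val` are placeholders for your autoencoder and data.
model.fit(
    x_train, x_train,                  # autoencoder: input is also the target
    validation_data=(x_val, x_val),
    epochs=200,                        # give the bigger model more training time
    batch_size=32,
    callbacks=[early_stop],
)
```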
You would need to add support for reading JPG images, and then it should work without any issue, since we are ultimately just using 3-channel image data.
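For reference, here is a minimal sketch (not the repo's actual loader) of how both PNG and JPG files can be read into the same 3-channel format; the folder layout and image size are illustrative assumptions.

```python
# Minimal sketch: load .png/.jpg/.jpeg files as 3-channel RGB arrays.
from pathlib import Path

import numpy as np
from PIL import Image

def load_images(folder, size=(256, 256)):
    """Load every PNG/JPG in `folder` as a float32 RGB array scaled to [0, 1]."""
    images = []
    for path in sorted(Path(folder).iterdir()):
        if path.suffix.lower() not in {".png", ".jpg", ".jpeg"}:
            continue
        # convert("RGB") drops any alpha channel from PNGs and gives the same
        # 3-channel layout for JPEGs, so the autoencoder input shape stays
        # consistent across formats.
        img = Image.open(path).convert("RGB").resize(size)
        images.append(np.asarray(img, dtype=np.float32) / 255.0)
    return np.stack(images)
```

The `convert("RGB")` call is the key part: it normalizes both formats to the same 3-channel layout before the data reaches the model.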