Testing monochromatic images on your code
iucurmak opened this issue · 6 comments
How can I test your code on monochromatic images? Is it possible, or what do I need to change in your code? Could you please let me know?
The network is not trained for monochromatic images. In theory, you can feed it a monochromatic image by repeating it across the channel dimension: if you have a monochromatic image `x` with shape `(H, W, 1)`, you can `tf.repeat` it into `(H, W, 3)`. The resulting bitrate will be suboptimal, however, since the network is not trained to exploit the redundancy between the (now identical) channels.
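For instance, a minimal sketch of the channel-repeat trick (this assumes TensorFlow 2.x; the variable names are illustrative, not from the repo):

```python
import tensorflow as tf

# A monochromatic image x of shape (H, W, 1).
x = tf.random.uniform([256, 256, 1])

# Repeat the single channel 3 times along the channel axis,
# yielding a (H, W, 3) tensor the RGB-trained network accepts.
x_rgb = tf.repeat(x, repeats=3, axis=-1)

print(x_rgb.shape)  # (256, 256, 3)
```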
In theory, however, you could train a new network whose input and output are `(H, W, 1)`-dimensional instead.
Thank you for informing me. Which files do I need to modify to train the code on monochromatic images? And which files need changes for the testing code?
Probably requires quite some changes -- essentially everywhere the code assumes a channel dimension of 3. Some locations:
- In `inputpipeline.py`, the `images_decoded` and `_preprocess` functions
- In `autoencoder.py`, the `get_mean_var` function (must calculate a new mean/variance, or just use the average of the means/variances there)
Starting from the changes in `inputpipeline.py`, you can probably run it and follow the errors.
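As a rough sketch of the kind of changes meant here (the function names come from the thread above, but their bodies are assumptions; the actual code in the repo will differ):

```python
import tensorflow as tf

def images_decoded_grayscale(raw_bytes):
    # Hypothetical single-channel variant of images_decoded:
    # decode straight to 1 channel instead of 3.
    img = tf.image.decode_jpeg(raw_bytes, channels=1)
    img.set_shape([None, None, 1])
    return img

def get_mean_var_grayscale():
    # Hypothetical single-channel variant of get_mean_var:
    # e.g., the average of the per-channel RGB means/variances.
    # The values below are placeholders; recompute them for your data.
    mean = tf.constant([115.0])
    var = tf.constant([3800.0])
    return mean, var
```

The `_preprocess` function would similarly need any channel-dependent steps (e.g., per-channel normalization) reduced to a single channel.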
I appreciate your help. Have a great day.
Let me know if you manage to train a model for monochromatic images, would be interesting!