output sample images are incorrectly normalized
jcpeterson opened this issue · 6 comments
jcpeterson commented
Some of the sample images look washed out. I suspect the min/max pixel values of the real samples and the generated ones are different, and the range fluctuates wildly from one output to the next. This makes it hard to judge sample quality during training most of the time.
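For illustration, here's a hypothetical reproduction of the symptom (made-up shapes and ranges): if the generated half spans a wider value range than the real half, a single global rescale squeezes the real images into a narrow band:

import numpy as np

real = np.random.uniform(0.0, 1.0, (4, 8, 8))    # real samples in [0, 1]
fake = np.random.uniform(-3.0, 3.0, (4, 8, 8))   # generated samples, wider range
grid = np.concatenate([real, fake], axis=1)
grid = (grid - grid.min()) / (grid.max() - grid.min())  # one global normalization
print(grid[:, :8, :].min(), grid[:, :8, :].max())  # real half ends up ~[0.5, 0.67]: washed out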
jcpeterson commented
Here's a quick hack to fix it:
import numpy as np

# Normalize the two halves (real vs. generated) independently to [0, 1].
half = samples.shape[1] // 2  # integer division so the index is an int in Python 3
samples[:, :half, :] -= np.min(samples[:, :half, :])
samples[:, :half, :] /= np.max(samples[:, :half, :])
samples[:, half:, :] -= np.min(samples[:, half:, :])
samples[:, half:, :] /= np.max(samples[:, half:, :])
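This rescales each half to [0, 1] on its own, so the real images' contrast no longer depends on the generated images' value range.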
jcpeterson commented
Note that I also removed the white spacing between the images.
github-pengge commented
Thx.
jcpeterson commented
This doesn't seem to be fully working for some reason; I'm not sure why.
github-pengge commented
Same here. I can't figure out why either.
jcpeterson commented
Something like this seems to remove outlier values and fix the problem:
# Clamp outliers in the generated (fake) half to within 4 standard
# deviations of its mean, so extreme values don't skew the rescaling.
half = samples.shape[1] // 2  # integer division for a valid index
sd_fake = np.std(samples[:, :half, :])
m_fake = np.mean(samples[:, :half, :])
margin = m_fake + (sd_fake * 4)
samples[:, :half, :] = np.clip(samples[:, :half, :], -margin, margin)
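For completeness, a minimal sketch combining the clipping with the earlier per-half rescaling, assuming `samples` is a float array with the generated images in the first half along axis 1 (as above):

import numpy as np

half = samples.shape[1] // 2
fake, real = samples[:, :half, :], samples[:, half:, :]  # views into samples

# Clamp fake-sample outliers to mean + 4 standard deviations before scaling.
margin = fake.mean() + 4 * fake.std()
np.clip(fake, -margin, margin, out=fake)

# Rescale each half to [0, 1] independently.
for part in (fake, real):
    part -= part.min()
    part /= part.max()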