twairball/keras_lstm_vae

Model Latent Layer Dimensionality?

Hamcastle opened this issue · 1 comment

Hi,

The default config for the VAE has an intermediate layer size of 32 and a latent layer size of 100. The example data are 3*13, so the total input dimension is 39. Is there some reason why, unlike in most autoencoders, the "bottleneck" latent layer is larger than the input? There are autoencoder architectures that do this (e.g. sparse autoencoders), but they usually require weight or activity regularization somewhere in the layer sequence.
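For reference, here is roughly the configuration I mean (a sketch, assuming `create_lstm_vae` from this repo's `lstm_vae` module keeps its current argument names):

```python
from lstm_vae import create_lstm_vae

# Example data: 3 timesteps x 13 features, so 39 total input dimensions.
# Default config: intermediate_dim = 32, latent_dim = 100, i.e. the
# "bottleneck" (100) is wider than the flattened input (39).
vae, encoder, generator = create_lstm_vae(
    input_dim=13,
    timesteps=3,
    batch_size=1,
    intermediate_dim=32,
    latent_dim=100,
    epsilon_std=1.0,
)
```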

No real reason; it just follows the defaults from the Keras VAE example.

The bundled example is probably not a good one, as you point out...
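A more conventional bottleneck for this data might look something like the sketch below (the sizes are illustrative only, and again I'm assuming the current `create_lstm_vae` signature):

```python
from lstm_vae import create_lstm_vae

# Shrink rather than expand toward the latent layer:
# 39 total input dims -> intermediate 32 -> latent 16.
vae, encoder, generator = create_lstm_vae(
    input_dim=13,
    timesteps=3,
    batch_size=1,
    intermediate_dim=32,
    latent_dim=16,
    epsilon_std=1.0,
)
```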