ML4GW/DeepClean

Parameterize autoencoder architecture


The current implementation of the autoencoder architecture makes picking the depth dynamically difficult: the paddings in the inverse transposes, and the padding/kernel size in the last convolution layer, have to be picked just-so for the output length to magically match the input length. This makes automatically exploring depth or parameter count as a hyperparameter more difficult.

One solution might be to do dynamic shape inference on the output of the transpose convolutions; from there it should be (I think) reasonably straightforward algebra to infer the kernel length required in the last convolution to make the lengths work out. Keras has this kind of shape inference built in, but with Torch I think we'll just have to work out the math ourselves (though I'd LOVE to be proven wrong here).
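As a rough sketch of what that algebra could look like: the length formula below is the one documented for `nn.ConvTranspose1d`, and with a final `Conv1d` at stride 1, padding 0, dilation 1, the required kernel size falls out directly. The layer parameters in the example (two transpose convs with stride 4, kernel 8, padding 1) are made up for illustration and aren't the actual DeepClean architecture; alternatively one could just push a dummy tensor through the transpose stack and read off `.shape[-1]`.

```python
def conv_transpose1d_out_len(l_in, kernel_size, stride=1, padding=0,
                             output_padding=0, dilation=1):
    # Output-length formula from the PyTorch nn.ConvTranspose1d docs
    return ((l_in - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

def final_conv_kernel_size(l_after_transposes, l_target):
    # A Conv1d with stride=1, padding=0, dilation=1 maps
    # l_in -> l_in - kernel_size + 1, so solve for kernel_size:
    k = l_after_transposes - l_target + 1
    if k < 1:
        raise ValueError("transpose stack output is shorter than the target")
    return k

# Hypothetical example: encoder downsampled a length-4096 input by
# stride 4 twice, so the decoder starts from length 256
l = 256
for _ in range(2):
    l = conv_transpose1d_out_len(l, kernel_size=8, stride=4, padding=1)
# l is now 4106; the last conv needs kernel_size = 4106 - 4096 + 1 = 11
k = final_conv_kernel_size(l, 4096)
```

The same inversion works for any depth, since the transpose-conv formula can just be iterated over however many layers the hyperparameter search picks.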

Potential link here with #31 (parameterizing a model by its target bandwidth), but maybe this is going a bit crazy.