scalability
Char-Aznable opened this issue · 1 comment
Hi @nicola-decao, very nice work! I am thinking of using BNAF to do variational inference where the posterior is over a space of a few thousand to tens of thousands of dimensions. I wonder whether the current implementation can scale up to that many dimensions; my concern is that the model might not fit into GPU memory. Can you provide an estimate of the space complexity of a given architecture consisting of, say, n stacked flows of m hidden layers each? I know you gave an estimate of the number of parameters in Table 2 of the paper, but how does that translate into a memory requirement? I would appreciate your insight on this because I am more of a TensorFlow person, so trying this out in PyTorch will likely take me a while. Thanks in advance!
Hi @Char-Aznable, a single flow block has space complexity O(m * k^2), where k is your data dimensionality and m is the number of hidden layers. So for n stacked flows it would be O(n * m * k^2). I hope it helps.
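As a rough illustration of what that complexity means in bytes, here is a minimal back-of-the-envelope sketch (assuming float32 weights, and folding the hidden-width multiplier into the big-O constant; the helper names below are hypothetical, not part of the BNAF codebase):

```python
# Back-of-the-envelope memory estimate for n stacked BNAF flows of
# m hidden layers over k-dimensional data, based on the stated
# O(n * m * k^2) space complexity. Counts weights only.

def bnaf_param_estimate(n_flows: int, m_layers: int, k_dims: int) -> int:
    """Approximate parameter count: each of the n flows has m hidden
    layers whose (block) weight matrices are on the order of k x k."""
    return n_flows * m_layers * k_dims ** 2

def bnaf_weight_memory_mib(n_flows: int, m_layers: int, k_dims: int,
                           bytes_per_param: int = 4) -> float:
    """Approximate weight memory in MiB, assuming float32 (4 bytes)."""
    params = bnaf_param_estimate(n_flows, m_layers, k_dims)
    return params * bytes_per_param / 2**20

if __name__ == "__main__":
    # e.g. 3 stacked flows, 2 hidden layers, a 10,000-dimensional posterior
    print(f"{bnaf_weight_memory_mib(3, 2, 10_000):.0f} MiB")  # ~2289 MiB
```

Note this counts only the weights: during training, gradients, activations, and optimizer state (e.g. Adam's two moment buffers) typically multiply the weight memory by a small constant factor, so the actual GPU footprint will be several times larger.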