ELIFE-ASU/INNLab

How to train an INN?


Dear authors,

Thank you for creating such a great project. I would like to know how I can train an INN. Is the training process the same as in normal PyTorch?

Best,
Lin

Although I'm trying my best to align the functions and layers with the original PyTorch API, there are still some key differences:

  1. If you want to control the output distribution, you need to train the Jacobian determinant at the same time. All of the methods in INNLab return three parts: f(x), log(p), and log(det(J)). log(p) is the log probability of the elements discarded by the network when you use resizing layers, and log(det(J)) is the log of the Jacobian determinant. When training the network, you also want to maximize log(p) + log(det(J)). For more details, see: Dinh, Laurent, Jascha Sohl-Dickstein, and Samy Bengio. “Density Estimation Using Real NVP.” arXiv:1605.08803 [cs, stat], February 27, 2017. http://arxiv.org/abs/1605.08803.

  2. If you don't have any requirements on the output distribution, then you can simply use it as a normal NN;

  3. However, if you are using INN.JacobianLinear, invertibility is not guaranteed if you don't train the Jacobian term. This is because log(det(J)) > -Inf is precisely the condition for a matrix to be invertible.
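To make point 1 concrete, here is a minimal sketch of such a training loop. It uses a toy invertible linear layer as a stand-in for INNLab's real modules — `ToyInvertibleLinear` and its exact three-part return signature are illustrative assumptions, not the library's API — but the loss is the Real NVP objective described above: minimize the negative of the prior log-likelihood plus log(p) plus log(det(J)).

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for an INNLab-style layer that returns
# (f(x), log(p), log(det(J))) as described in the reply above.
class ToyInvertibleLinear(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # start near the identity so the map is invertible at init
        self.weight = nn.Parameter(torch.eye(dim) + 0.01 * torch.randn(dim, dim))

    def forward(self, x):
        y = x @ self.weight.T
        # for a linear map, log|det J| is the same for every sample
        logdet = torch.slogdet(self.weight)[1].expand(x.shape[0])
        # no elements are discarded here, so log(p) contributes zero
        logp = torch.zeros(x.shape[0])
        return y, logp, logdet

torch.manual_seed(0)
model = ToyInvertibleLinear(4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
prior = torch.distributions.Normal(0.0, 1.0)  # standard normal latent prior

x = torch.randn(64, 4)
for _ in range(10):
    y, logp, logdet = model(x)
    # maximize log p_prior(f(x)) + log(p) + log(det(J))
    # -> minimize the negative log-likelihood
    loss = -(prior.log_prob(y).sum(dim=1) + logp + logdet).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The only INN-specific part is the extra `logp + logdet` term in the loss; everything else (optimizer, backward pass) is ordinary PyTorch training.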