pkuxmq/Invertible-Image-Rescaling

The formula in paper seems not the same shown in code?

JuZiSYJ opened this issue · 4 comments

It is a great job!
The inverse operation is quite novel to me. I notice that in the paper, the formula is

h1^(l+1) = h1^l ⊙ exp(ψ(h2^l)) + φ(h2^l)
h2^(l+1) = h2^l ⊙ exp(ρ(h1^(l+1))) + η(h1^(l+1))

However, the code is

if not rev:
    y1 = x1 + self.F(x2)
    self.s = self.clamp * (torch.sigmoid(self.H(y1)) * 2 - 1)
    y2 = x2.mul(torch.exp(self.s)) + self.G(y1)

The exp operation on h1 seems not to be applied?
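For what it's worth, the coupling in the quoted code is invertible even without an exp on the y1 branch. A minimal numpy sketch (F, G, H here are arbitrary fixed functions standing in for the repo's learned subnet modules, and clamp is assumed to be 1.0):

```python
import numpy as np

# Stand-ins for the learned subnets F, G, H in the repo (hypothetical).
def F(x): return np.tanh(x)
def G(x): return 0.5 * x
def H(x): return np.sin(x)

clamp = 1.0

def forward(x1, x2):
    y1 = x1 + F(x2)                              # additive branch, no exp
    s = clamp * (2 / (1 + np.exp(-H(y1))) - 1)   # sigmoid(.)*2-1, as in the code
    y2 = x2 * np.exp(s) + G(y1)                  # affine (with exp) branch
    return y1, y2

def inverse(y1, y2):
    s = clamp * (2 / (1 + np.exp(-H(y1))) - 1)   # s is recomputable from y1 alone
    x2 = (y2 - G(y1)) * np.exp(-s)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = np.random.randn(4), np.random.randn(4)
r1, r2 = inverse(*forward(x1, x2))
assert np.allclose(x1, r1) and np.allclose(x2, r2)  # exact reconstruction
```

Since y1 is produced before s is computed, the inverse can recompute s from y1 and undo both branches, which is why no exp is needed on the first half.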

Besides, could you please explain what the jacobian value means? I couldn't get it :)

Please refer to the details in Section 3.2 InvBlock.
The jacobian is the log-determinant of the Jacobian matrix. It is not used in our work. For details, please refer to other works on invertible neural networks, e.g. RealNVP.
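For context, in such coupling layers the Jacobian is block-triangular, so its log-determinant reduces to the sum of the log-scales s (the additive branch contributes zero). A small sketch under that assumption:

```python
import numpy as np

# For y1 = x1 + F(x2), y2 = x2 * exp(s) + G(y1), the Jacobian of the map
# (x1, x2) -> (y1, y2) is block-triangular with diagonal scale exp(s),
# so log|det J| = sum(s). Flow models use this for exact likelihood;
# it is not needed for IRN's training objective.
def log_det_jacobian(s):
    return np.sum(s)

s = np.array([0.1, -0.2, 0.3])
# sanity check: product of diagonal scales exp(s) has log equal to sum(s)
assert np.isclose(np.log(np.prod(np.exp(s))), log_det_jacobian(s))
```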

Thanks for the reply, but I cannot find the exp operation in y1...

Besides, channel_split_num is always 3 in InvBlock, but the group conv does not produce the low-frequency component in the first 3 output channels; it appears as the first channel within each group of the output...

y1 is the low-frequency part, and we employ an additive transformation without exp, as illustrated in the InvBlock paragraph and Fig. 2.
The low-frequency part w.r.t. the original input is always the first 3 channels; the rest are high-frequency contents (e.g. low-frequency parts of high-frequency contents, or high-frequency parts of low- and high-frequency contents).
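To illustrate the channel ordering question: a Haar transform of a 3-channel input yields 12 channels, and if the raw output interleaves the four bands per input channel (LL, LH, HL, HH for channel 0, then channel 1, ...), a band-major reordering puts all LL (low-frequency) bands first, which is what channel_split_num=3 assumes. A hypothetical numpy illustration, not the repo's actual code:

```python
import numpy as np

in_ch, bands = 3, 4
# channel indices as they would appear interleaved per input channel:
# [c0_LL, c0_LH, c0_HL, c0_HH, c1_LL, ...] -> shape [channel, band]
interleaved = np.arange(in_ch * bands).reshape(in_ch, bands)
# band-major reordering: all LL bands first, then LH, HL, HH
band_major = interleaved.T.reshape(-1)
# the first 3 entries are the LL band of input channels 0, 1, 2
assert list(band_major[:3]) == [0, 4, 8]
```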

Thanks a lot. I had missed that you rearrange the output channels after the Haar transform; that is right.