FF-Net implementation is inconsistent with the description in TABLE 1.
songkq opened this issue · 2 comments
songkq commented
Hi, @MinZHANG-WHU
As for the conv_f layer of FF-Net, TABLE 1 in the paper describes it as Conv+ReLU. However, the FF-Net implementation doesn't include the ReLU operation. Does it matter?
n.concat_1 = self.concat(n.data_t12, n.fd_1, n.up_2, n.up_3)
n.conv_t = self.conv(n.concat_1, 3, self.ff_channel, stride=1, pad=1,
                     name_w="conv_t_w", name_b="conv_t_b",
                     lr_mult_w=1, lr_mult_b=1,
                     decay_mult=1, bias_term=True)
n.conv_prob = self.conv(n.conv_t, 1, 1, stride=1, pad=0,
                        name_w="conv_prob_w", name_b="conv_prob_b",
                        lr_mult_w=1, lr_mult_b=1,
                        decay_mult=1, bias_term=True)
n.sig = L.Sigmoid(n.conv_prob, in_place=False)
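For reference, if one wanted the network to match TABLE 1, an in-place ReLU could be inserted between conv_t and conv_prob. A minimal sketch in Caffe prototxt form (the layer name conv_t_relu is hypothetical, and since the released model was presumably trained without this layer, adding it would change the outputs of the pretrained weights):

```protobuf
layer {
  name: "conv_t_relu"   # hypothetical name, not in the released net
  type: "ReLU"
  bottom: "conv_t"
  top: "conv_t"          # in-place: clamps conv_t activations at zero
}
```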
MinZHANG-WHU commented
Sorry for the confusion. This is a careless mistake. The FF-Net implementation does not actually include the ReLU layer.
songkq commented
OK.