zhaw/neural_style

mrf-cnn differs from the paper

Closed this issue · 1 comment

While reviewing the mrf_cnn code, I noticed that the nearest patches (target_patch0, target_patch1) are computed once for each scale and are not updated at every iteration (rough sketch of what I mean below). I believe this differs from the paper, although it does make the iterations much faster. Is that because the modification doesn't hurt the results, or have I misunderstood the original paper?
Great work with MXNet, btw.
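
To be concrete, here is a minimal numpy sketch (my own illustration, not the actual repo code; the helper names are made up) of how I understand the current behaviour: content patches are matched to their nearest style patches once per scale, and the matched style patches then stay fixed as the targets for the MRF loss.

```python
import numpy as np

def extract_patches(feat, size=3, stride=1):
    """Slide a size x size window over a (C, H, W) feature map."""
    c, h, w = feat.shape
    patches = [feat[:, i:i+size, j:j+size].ravel()
               for i in range(0, h - size + 1, stride)
               for j in range(0, w - size + 1, stride)]
    return np.stack(patches)                      # (num_patches, C*size*size)

def nearest_style_patches(query_feat, style_feat):
    """Match each query patch to its nearest style patch by
    normalized cross-correlation (cosine similarity)."""
    qp = extract_patches(query_feat)
    sp = extract_patches(style_feat)
    qp_n = qp / (np.linalg.norm(qp, axis=1, keepdims=True) + 1e-8)
    sp_n = sp / (np.linalg.norm(sp, axis=1, keepdims=True) + 1e-8)
    nearest = (qp_n @ sp_n.T).argmax(axis=1)      # best style patch per query patch
    return sp[nearest]

# As I read the current code, the matching is effectively done once per scale,
# with the content features as the query, and the result is then reused as
# target_patch0 / target_patch1 for all later iterations:
# targets = nearest_style_patches(content_feat, style_feat)
```

If the paper is followed literally, the query would instead be the features of the current output, recomputed at every iteration.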

zhaw commented

I think I misunderstood the paper. The paper says the output is initialized with random noise, and I thought that if we update the patch assignment according to the output, the patches would be assigned essentially at random during the first iteration and the optimization would diverge. That confused me, so I assumed the patches were meant to be fixed according to the content image. I tried to read the author's original code, but I'm not familiar with Lua.
Thanks for pointing that out! I will fix this later.
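
If I now understand the paper correctly, the fix is to re-run the patch assignment on the current output's features inside the optimization loop. Here is a self-contained toy sketch of that idea (plain numpy on random "features" with a hand-written gradient step; the real implementation would of course use CNN features and MXNet):

```python
import numpy as np

def extract_patches(feat, size=3):
    """Slide a size x size window over a (C, H, W) feature map."""
    c, h, w = feat.shape
    return np.stack([feat[:, i:i+size, j:j+size].ravel()
                     for i in range(h - size + 1)
                     for j in range(w - size + 1)])

rng = np.random.default_rng(0)
style_feat = rng.normal(size=(8, 16, 16))
output_feat = rng.normal(size=(8, 16, 16))   # output initialized with random noise

style_patches = extract_patches(style_feat)
sp_n = style_patches / (np.linalg.norm(style_patches, axis=1, keepdims=True) + 1e-8)

lr = 0.1
for it in range(100):
    out_patches = extract_patches(output_feat)
    op_n = out_patches / (np.linalg.norm(out_patches, axis=1, keepdims=True) + 1e-8)
    nearest = (op_n @ sp_n.T).argmax(axis=1)   # re-assign patches every iteration
    target = style_patches[nearest]
    # gradient of 0.5 * ||out_patches - target||^2 w.r.t. the feature map:
    # scatter each patch residual back onto the positions it was cut from
    residual = out_patches - target
    grad = np.zeros_like(output_feat)
    k = 0
    for i in range(output_feat.shape[1] - 2):
        for j in range(output_feat.shape[2] - 2):
            grad[:, i:i+3, j:j+3] += residual[k].reshape(8, 3, 3)
            k += 1
    output_feat -= lr * grad
```

The point is just that the assignment is recomputed from the current output at each step; even though the first assignment is essentially random (the output starts as noise), the targets are always real style patches, so there is no obvious reason for it to diverge.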