Why does training use the original CSPN implementation while inference uses the accelerated CSPN?
dongliangcao opened this issue · 1 comment
dongliangcao commented
Hi, thanks for your great work! I am wondering why the original CSPN is used during training, since, as you mentioned in the paper, the newly implemented one is much faster.
JUGGHM commented
Thanks for your interest! The accelerated CSPN was originally designed for dilated CSPN, where the sample and stitch steps are time-consuming. The unfolding operation it introduces saves time but consumes more GPU memory. So here is the answer: the original CSPN is friendlier to devices with smaller GPU memory.
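To illustrate the tradeoff described above, here is a minimal PyTorch sketch (not the repository's actual CSPN code; the function names, tensor shapes, and softmax-normalized affinities are illustrative assumptions). An unfold-based propagation step performs the k×k neighbour aggregation as one batched multiply-add, but materialises a tensor roughly k×k times the size of the depth map, whereas looping over neighbour offsets keeps peak activation memory low at the cost of more kernel launches.

```python
# Minimal sketch of one affinity-weighted propagation step, two ways.
# Not the repository's implementation; shapes and names are assumptions.
import torch
import torch.nn.functional as F

def propagate_loop(depth, affinity, k=3):
    """Loop over the k*k neighbour offsets: slower, but only one
    (B, 1, H, W) temporary at a time, so peak GPU memory stays low."""
    B, _, H, W = depth.shape
    pad = k // 2
    padded = F.pad(depth, [pad] * 4)
    out = torch.zeros_like(depth)
    idx = 0
    for dy in range(k):
        for dx in range(k):
            shifted = padded[:, :, dy:dy + H, dx:dx + W]
            out = out + affinity[:, idx:idx + 1] * shifted
            idx += 1
    return out

def propagate_unfold(depth, affinity, k=3):
    """Unfold gathers all k*k neighbours into one (B, k*k, H*W) tensor:
    a single batched multiply-add (fast), but roughly k*k times the
    activation memory of the loop version."""
    B, _, H, W = depth.shape
    neighbours = F.unfold(depth, kernel_size=k, padding=k // 2)  # (B, k*k, H*W)
    out = (affinity.view(B, k * k, H * W) * neighbours).sum(dim=1, keepdim=True)
    return out.view(B, 1, H, W)

if __name__ == "__main__":
    depth = torch.rand(2, 1, 64, 64)
    # Softmax-normalized affinities, one weight per neighbour offset (assumption).
    affinity = torch.softmax(torch.rand(2, 9, 64, 64), dim=1)
    assert torch.allclose(propagate_loop(depth, affinity),
                          propagate_unfold(depth, affinity), atol=1e-5)
```

Both functions compute the same result; the choice is purely a speed versus GPU-memory tradeoff, which is why the memory-lighter variant can be preferable during training when activations must also be kept for backpropagation.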