jongchyisu/mvcnn_pytorch

how to concatenate depth image and shaded image?

Closed this issue · 4 comments

Hello, I would like to know how to concatenate the depth image and the shaded image. Do you simply add the depth image as an alpha channel to the shaded image and process them together, or do you train them separately and combine them at the view-pooling layer?

I'm actually concatenating the input as 6 channels, but I guess it behaves similarly to using the alpha channel. Just note that I'm using the ImageNet pre-trained model, so the first layer's weights need to be adapted over the channel dimension (e.g., averaged) if you do the same.

thank you very much : )

Can you explain it in more detail? I tried adding the depth map as an alpha channel and changing `in_channels` of the first `Conv2d` layer from 3 to 4, but the result was worse than with the 3-channel RGB input.
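For reference, a minimal sketch of what the 4-channel attempt might look like (this is not code from the repo; `rgb_to_rgba_conv` is a hypothetical helper). One common pitfall is leaving the extra channel's kernel randomly initialised, which disturbs the pre-trained RGB features; initialising it from the mean of the pre-trained RGB kernels is a gentler starting point:

```python
import torch
import torch.nn as nn

def rgb_to_rgba_conv(conv3: nn.Conv2d) -> nn.Conv2d:
    """Extend a pre-trained 3-channel first conv to 4 channels (RGB + depth).

    The 4th channel's kernel is initialised as the mean of the RGB kernels,
    so a grayscale-like depth map is handled sensibly from the start.
    """
    conv4 = nn.Conv2d(4, conv3.out_channels,
                      kernel_size=conv3.kernel_size,
                      stride=conv3.stride,
                      padding=conv3.padding,
                      bias=conv3.bias is not None)
    with torch.no_grad():
        conv4.weight[:, :3].copy_(conv3.weight)                          # keep RGB kernels
        conv4.weight[:, 3:].copy_(conv3.weight.mean(dim=1, keepdim=True))  # depth kernel = mean of RGB
        if conv3.bias is not None:
            conv4.bias.copy_(conv3.bias)
    return conv4

# Stand-in for a pre-trained first layer, e.g. torchvision's vgg11().features[0]:
conv3 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
conv4 = rgb_to_rgba_conv(conv3)
x = torch.randn(2, 4, 224, 224)   # RGB + depth stacked as a 4-channel input
print(conv4(x).shape)
```

Whether this beats the 3-channel baseline also depends on normalising the depth channel to a range comparable to the RGB channels.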

I append the depth map as another RGB image, so the input has 6 channels. The first layer's weights are replicated along the channel dimension (when using the pre-trained model).
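The 6-channel variant described above can be sketched as follows (again, not the repo's actual code; `expand_first_conv` is a hypothetical helper). The pre-trained RGB kernels are tiled across the new channel groups and scaled by 3 / in_channels so that activation magnitudes stay comparable to the original network:

```python
import torch
import torch.nn as nn

def expand_first_conv(conv3: nn.Conv2d, in_channels: int = 6) -> nn.Conv2d:
    """Replicate a pre-trained 3-channel first conv to accept more channels."""
    assert in_channels % 3 == 0, "replication assumes a multiple of 3 channels"
    new_conv = nn.Conv2d(in_channels, conv3.out_channels,
                         kernel_size=conv3.kernel_size,
                         stride=conv3.stride,
                         padding=conv3.padding,
                         bias=conv3.bias is not None)
    with torch.no_grad():
        reps = in_channels // 3
        # Tile the pre-trained kernels and rescale to preserve output scale.
        w = conv3.weight.repeat(1, reps, 1, 1) * (3.0 / in_channels)
        new_conv.weight.copy_(w)
        if conv3.bias is not None:
            new_conv.bias.copy_(conv3.bias)
    return new_conv

# Stand-in for a pre-trained first layer, e.g. torchvision's vgg11().features[0]:
conv3 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
conv6 = expand_first_conv(conv3, in_channels=6)
x = torch.randn(2, 6, 224, 224)   # shaded RGB + depth-as-RGB, concatenated
print(conv6(x).shape)
```

With the 3/in_channels scaling, feeding the same image into both channel groups reproduces the original layer's output, so the pre-trained features are preserved at initialisation.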