Hello, is there a paper corresponding to this project?
Also, could you upload a PyTorch version of the code? Much appreciated.
Please refer to "A light-weight stereo matching network with color guidance refinement". I am not sure whether I still have a PyTorch backup from a project four years ago; if I find one, I will upload it in my free time.
Thank you for the quick reply; I have downloaded the paper. A PyTorch version would be even better, but it is fine without one. Thanks again!
Sorry to bother you again. I reproduced your work in PyTorch and still have a few questions. The 3PE I measured on KITTI 2015 is 3.95, noticeably higher than the 2.68 reported in the paper, while my KITTI 2012 result is close to the paper's. Also, generating a disparity map takes about 0.16 s on an RTX 3060, far from the 0.04 s on a 2080 Ti reported in the paper. Where might the problem be?
- Regarding inference time: on the one hand, a 3060 still has a performance gap compared with a 2080 Ti or a Titan XP; on the other hand, the measured inference time should not include data loading and post-processing.
- Have you tried running inference on KITTI 2015 directly with the Paddle version of the code and the provided model checkpoint? I recommend doing this first to get the correct FPS and accuracy; a minimal timing sketch follows below.
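To measure FPS in the sense meant above, time only the forward pass, after a warm-up, and synchronize the GPU before reading the clock. Below is a minimal PyTorch sketch under those assumptions; `model`, `imgL`, and `imgR` are placeholders for your own network and preprocessed input tensors, not names from this repository.

```python
import torch

# Assumes `model` is built/loaded and `imgL`/`imgR` are preprocessed tensors.
model.eval().cuda()
imgL, imgR = imgL.cuda(), imgR.cuda()

with torch.no_grad():
    # Warm-up: the first iterations include CUDA context setup and
    # kernel-selection overhead, so exclude them from the measurement.
    for _ in range(10):
        model(imgL, imgR)

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)

    start.record()
    for _ in range(100):
        model(imgL, imgR)
    end.record()
    torch.cuda.synchronize()  # wait for all queued kernels to finish

avg_ms = start.elapsed_time(end) / 100  # elapsed_time returns milliseconds
print('forward pass: %.2f ms (%.1f FPS)' % (avg_ms, 1000.0 / avg_ms))
```

Without the warm-up and the final `synchronize()`, CUDA's asynchronous execution makes the wall-clock reading almost meaningless.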
OK, I will run the experiments in the Paddle environment.
One more question: the paper mentions that the fourth-stage disparity refinement adds only about 2 ms to the model's total time. I therefore plugged your disparity refinement module into the AnyNet network; the accuracy improved, but the inference time increased by roughly 80 ms, which surprised me. What might I be doing wrong? Thank you very much!
But in the code of other projects I have seen, the reported prediction time does include the pre- and post-processing time, like the following:
```python
import time

import torch

# Snippet in the style of other stereo projects: the timed call below
# includes the GPU transfer and the .cpu().numpy() post-processing.
def test(imgL, imgR):
    model.eval()
    if args.cuda:
        imgL = imgL.cuda()
        imgR = imgR.cuda()
    with torch.no_grad():
        output = model(imgL, imgR)
    # post-processing: squeeze and copy the disparity map back to the CPU
    output = torch.squeeze(output).data.cpu().numpy()
    return output

# ...

start_time = time.time()
pred_disp = test(imgL, imgR)
print('time = %.2f' % (time.time() - start_time))
```
- As far as I know, AnyNet achieves higher inference efficiency because its authors implemented low-level CUDA operators to accelerate computation. Likewise, different PyTorch versions can also affect inference efficiency.
- In my view, inference time refers to the time a neural network takes for a single forward pass. For some tasks, e.g. object detection, it should also include post-processing steps such as NMS, which significantly affect model accuracy.
- If you measure the time of the entire pipeline, it should include data loading and post-processing; pre- and post-processing are often data-related but independent of the model. For regular model inference, the input and output sizes are usually fixed. A stage-by-stage timing sketch follows below.
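To illustrate the last point, here is a sketch that splits the earlier snippet's pipeline into its stages: host-to-device copy, the forward pass, and post-processing. `model`, `imgL`, and `imgR` are again placeholders from that snippet, the `timed` helper is hypothetical, and a CUDA GPU is assumed.

```python
import time

import torch

def timed(label, fn):
    torch.cuda.synchronize()  # drain any pending GPU work first
    t0 = time.perf_counter()
    out = fn()
    torch.cuda.synchronize()  # wait until this stage has really finished
    print('%s: %.1f ms' % (label, (time.perf_counter() - t0) * 1e3))
    return out

model.eval()
with torch.no_grad():
    # 1) host-to-device copy: data-related, not part of the model
    imgL_g, imgR_g = timed('H2D copy', lambda: (imgL.cuda(), imgR.cuda()))
    # 2) the forward pass, i.e. what "inference time" refers to here
    output = timed('forward', lambda: model(imgL_g, imgR_g))
    # 3) post-processing: device-to-host copy and conversion to NumPy
    pred_disp = timed('D2H + numpy',
                      lambda: torch.squeeze(output).cpu().numpy())
```

Measured this way, you can see how much of the 0.16 s you observed is the forward pass itself and how much is data movement around it.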