princeton-vl/RAFT-Stereo

Question about the results on Middlebury


Hi, thanks for open-sourcing such great work!

I evaluated the MiddEval3 training set with your raftstereo-middlebury.pth model, but my results are worse than those shown on the scoreboard at https://vision.middlebury.edu/stereo/eval3/. How can I reproduce the precision reported on the website?

I used the default parameters in evaluate_stereo.py. The command I used and the results are shown below.
[screenshot: evaluation command and results]
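For reference, the invocation would have been of roughly this form; the exact command is in the screenshot above, and the flag names here follow the repository README (the checkpoint path and the `--dataset middlebury_H` choice are assumptions):

```shell
# Illustrative only -- the actual command I ran is in the screenshot.
python evaluate_stereo.py --restore_ckpt models/raftstereo-middlebury.pth --dataset middlebury_H
```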

The results you generated are the same ones that were submitted to the Middlebury training-set scoreboard. The difference is in the evaluation itself, which I believe weights difficult image regions more heavily, such as those "with fine detail and/or lack of texture."

See: https://vision.middlebury.edu/stereo/eval3/MiddEval3-newFeatures.html
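To make the point concrete, here is a minimal sketch (not Middlebury's actual evaluation code; the weight map and the 2.0-pixel threshold are illustrative assumptions) of how a region-weighted bad-pixel metric can score the same disparity map differently than a plain average:

```python
import numpy as np

def bad_pixel_rate(disp_est, disp_gt, valid, thresh=2.0, weights=None):
    """Fraction of valid pixels with disparity error above `thresh`.

    With a `weights` map (e.g. upweighting fine detail or textureless
    regions), each pixel contributes in proportion to its weight, so the
    same predictions can score differently than under a uniform average.
    """
    bad = (np.abs(disp_est - disp_gt) > thresh) & valid
    if weights is None:
        return bad.sum() / valid.sum()
    w = weights * valid
    return (w * bad).sum() / w.sum()

# Toy example: one prediction, two different scores.
rng = np.random.default_rng(0)
gt = rng.uniform(0, 64, size=(100, 100))
est = gt + rng.normal(0, 1.0, size=(100, 100))
est[:, :20] += rng.normal(0, 4.0, size=(100, 20))  # a "difficult" band

valid = np.ones_like(gt, dtype=bool)
weights = np.ones_like(gt)
weights[:, :20] = 5.0  # hypothetical upweighting of the difficult band

print("uniform  bad-2.0:", bad_pixel_rate(est, gt, valid))
print("weighted bad-2.0:", bad_pixel_rate(est, gt, valid, weights=weights))
```

Under the uniform average the noisy band contributes 20% of the score; under the hypothetical weighting it contributes more than half, so the weighted error comes out noticeably higher even though the predictions are identical.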