mikel-brostrom/boxmot

Question about Evaluation on First 10 Frames

YoungjaeDev opened this issue · 3 comments

Search before asking

  • I have searched the Yolo Tracking issues and found no similar bug report.

Question

Hello,

I noticed in the benchmark notes that the evaluation was performed using only the first 10 frames of each MOT17 sequence. Could you please explain the rationale behind this decision? Specifically, I am curious to understand:

  1. Why only the first 10 frames were chosen for evaluation instead of using the entire sequence.
  2. What benefits or insights are gained by limiting the evaluation to these initial frames.
  3. Whether there are any plans to extend the evaluation to include more frames or the entire sequences in future assessments.

Thank you for your time and clarification!

@mikel-brostrom

The `--eval-existing` option seems to have disappeared again; any idea what replaced it?

Why only the first 10 frames were chosen for evaluation instead of using the entire sequence.

The only reason is lack of time.
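For context on what a 10-frame evaluation entails, trimming MOT-format annotations down to the first N frames can be sketched as follows. This is a hypothetical helper for illustration, not code from the boxmot repo; `keep_first_frames` and its row format are assumptions based on the standard MOT17 `gt.txt` layout (`frame,id,x,y,w,h,conf,class,visibility`).

```python
# Hypothetical sketch: keep only detections from the first N frames of a
# MOT-format annotation file. Each line starts with the 1-based frame index.
def keep_first_frames(lines, n_frames=10):
    kept = []
    for line in lines:
        frame = int(line.split(",")[0])  # first CSV field is the frame number
        if frame <= n_frames:
            kept.append(line)
    return kept

rows = [
    "1,1,100,200,50,80,1,1,1.0",
    "10,1,120,210,50,80,1,1,1.0",
    "11,1,125,212,50,80,1,1,1.0",  # frame 11 falls outside the first 10
]
print(keep_first_frames(rows, n_frames=10))
```

Running the same filter over both the ground truth and the tracker output keeps the two sides comparable, which is why a frame cutoff is a cheap way to shorten benchmark runs.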

What benefits or insights are gained by limiting the evaluation to these initial frames.

Nothing. It is actually detrimental, because the second half of each sequence is generally harder than the first.

Whether there are any plans to extend the evaluation to include more frames or the entire sequences in future assessments.

I am planning to implement a new evaluation method based on py-motmetrics. In the process, I will also run full evaluations.

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!