Results in MSU Video Quality Metrics Benchmark
Opened this issue · 1 comments
Hello! We have recently launched our video quality metrics benchmark and evaluated this algorithm on its dataset. The dataset's distortions are compression artifacts on professional and user-generated content. The method took 3rd place on the global leaderboard and 1st place on the no-reference-only leaderboard in terms of SROCC. You can see more detailed results here. If you have any other video quality metric (either full-reference or no-reference) that you want to see in our benchmark, we kindly invite you to participate. You can submit it to the benchmark by following the submission steps described here.
Wow, nice work on the VQA benchmark dataset! Congratulations on the acceptance of this work to the NeurIPS Datasets and Benchmarks Track. Thank you for the effort and for reporting the results of MDTVSFA (as well as the other two methods, lidq92/LinearityIQA#26 (comment) and lidq92/VSFA#47 (comment)) on this benchmark dataset. If we have a new advanced metric, we will submit it to the benchmark.