Extracting metrics for any randomly selected video frame
nav9 opened this issue · 6 comments
There are three problems I'd like to report:

- During normal use, if I try to get VMAF scores using the various parameters for VMAF, the program just hangs. This was on MacOS 12 and the video was a wmv. I no longer have access to that Mac, so I can't specify exactly what was tried. What I remember was that I wrote a Python program that imported ffmpeg-quality-metrics and used it to calculate scores.
- I have a Python program for which I need to randomly sample multiple video frames from multiple videos and get the quality metrics of those frames. It'd be great if ffmpeg-quality-metrics supported getting the quality of such randomly selected frames (currently it appears to support only displaying quality metrics for the entire video).
- The examples shown on PyPI are inadequate. In the API section, it's necessary to show more code examples which demonstrate how to obtain the various scores.

The author of this project had requested in the comments here that an issue be created.

PS: I intend to add such quality metrics display in this project too.
Please post one issue per problem. These are three distinct issues.
During normal use if I try to get VMAF scores using the various parameters for VMAF, the program just hangs. This was on MacOS 12 and the video was a wmv. I no longer have access to that Mac, so I can't specify exactly what was tried. What I remember was that I wrote a Python program that imported ffmpeg-quality-metrics and used it to calculate scores.
If I don't have more information I will not be able to help with that, sorry.
I have a Python program for which I need to randomly sample multiple video frames from multiple videos and get the quality metrics of those frames. It'd be great if ffmpeg-quality-metrics supported getting the quality of such randomly selected frames (currently it appears to support only displaying quality metrics for the entire video).
This is not supported and would depend on being able to accurately seek to a given target frame. I can investigate whether this is possible.
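In the meantime, a rough workaround is to cut the same time range out of both files with ffmpeg and run the metrics only on those cuts. A minimal sketch follows; all paths, timestamps and durations are placeholders, and it assumes the two files are time-aligned so that the same start time lands on corresponding frames:

```python
# Workaround sketch: compute metrics only for a short, time-aligned segment
# of the reference and the distorted file instead of the whole video.
# Paths, timestamps and durations are placeholders.
import subprocess

from ffmpeg_quality_metrics import FfmpegQualityMetrics


def cut_segment(src, dst, start, duration):
    # -ss before -i seeks to the start time; re-encoding losslessly
    # (libx264 with -crf 0) keeps the cut frame-accurate without adding
    # distortion of its own. Audio is dropped since only video is scored.
    subprocess.run(
        ["ffmpeg", "-y", "-ss", start, "-i", src, "-t", duration,
         "-an", "-c:v", "libx264", "-crf", "0", dst],
        check=True,
    )


cut_segment("reference.mkv", "ref_cut.mkv", "00:01:23", "2")
cut_segment("distorted.mp4", "dist_cut.mkv", "00:01:23", "2")

# then point the library at the two cuts, exactly as in the documented example
ffqm = FfmpegQualityMetrics("ref_cut.mkv", "dist_cut.mkv")
```

This only approximates true per-frame sampling, since the cuts are addressed by time rather than by frame number.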
The examples shown on pypi are inadequate. In the API section, it's necessary to show more code examples which demonstrate how to obtain the various scores.
This is free and open source software with no guarantee of support or feature-completeness. You are more than welcome to either a) check the source code, b) check the tests to see what can be done with the API, or c) just run the shown command and print the result. It appears to me that you have not actually tried to run the example; you would have seen the metrics directly in the output dictionary.
In any case I've slightly updated the documentation to provide more info.
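For reference, the API usage being discussed amounts to roughly the following sketch. The paths are placeholders, and the metric-computation call is shown here as calc(), which is an assumption; check the README of the version you have installed for the exact method name:

```python
# Minimal sketch of the Python API as used in this thread.
# Paths are placeholders; calc() is assumed to be the metric-computation
# method -- check the README of your installed version for the exact name.
from ffmpeg_quality_metrics import FfmpegQualityMetrics

ffqm = FfmpegQualityMetrics("path/to/referenceVideo.mkv", "path/to/distortedVideo.mp4")

# compute the per-frame metrics first ...
ffqm.calc(["ssim", "psnr"])

# ... then aggregate them, as in the example on PyPI
stats = ffqm.get_global_stats()
print(stats["ssim"]["ssim_y"]["average"])

# the full dictionary contains all computed metrics and their statistics
print(stats)
```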
PS: Don't get me wrong – I appreciate constructive input to make this project better, but this post feels to me like a list of requests that don't show any effort. I will open a dedicated issue for per-frame metrics.
I agree; I posted this in haste when I realized that it had been a long time since you asked me to create an issue. However, there was no lack of effort. I had got the program working, which is when I encountered the problem with VMAF. And I am now no longer working with the client who gave me the Mac, so I can't run it there again. Thanks for updating the example.
I understand what you wrote about open source; I've written about the spirit behind it. However, the perspective to look at it from is not the "I'm sorry, I can only put so much time into free software" approach. You've spent a phenomenal amount of effort in building this amazing library. The perspective to look at it is that all of your effort would go un-appreciated if people reach the landing page, try to use the software, get stuck with it and just move on without using it, if it's difficult to figure out how to use it. Even my professors used to advise us that all the research we did and all the effort we put into writing our thesis would go un-appreciated if we prepared a poor presentation while defending our thesis.
New issue: when running the updated example on PyPI, I get this warning: /home/navin/.pyenv/versions/3.10.1/lib/python3.10/site-packages/numpy/core/_methods.py:233: RuntimeWarning: invalid value encountered in subtract x = asanyarray(arr - arrmean) on the line print(ffqm.get_global_stats()["ssim"]["ssim_y"]["average"]). The video I'm running is from this page (the ref file is sample_960x400_ocean_with_audio.mkv), and the distorted file is the same file converted to mp4 using ffmpeg.
About VMAF, I'll try again and reach out by posting a separate issue if necessary.
Also, in the line ffqm = FfmpegQualityMetrics("path/to/ref", "path/to/dist"), it'd help to write it as ("path/to/referenceVideo", "path/to/distortedVideo"). Of course, a person could look at the source code and figure it out from the documentation you wrote, but it's a lot more helpful to mention it so that people wouldn't have to do so. That's the same reason I'm verbose with this readme.
If you'd like to include a link to this answer (I wrote it) about how to build ffmpeg with vmaf, it'd help users who don't know how to do it.
I tried running the VMAF commands on Ubuntu, and they worked fine. I think the problem I encountered earlier on the Mac was that the video was large, so computing VMAF took too long and the program just appeared to hang.
This is all the more reason why it'd help to have a feature that allows us to sample any frame or portion of a video and calculate the scores only for that portion, instead of doing it for the entire video. Such functionality would help for programs like this, where the objective is to figure out video quality while trying out multiple ffmpeg parameters. It'd be impractical for such a program if the VMAF score calculation takes too long per video.
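As a stopgap, I'm thinking of calling ffmpeg's libvmaf filter directly and letting it score only every Nth frame via its n_subsample option, roughly like the sketch below. The file names are placeholders, the distorted file is passed as the first input per ffmpeg's libvmaf documentation, and it assumes an ffmpeg build that includes libvmaf:

```python
# Sketch: score only every 5th frame with ffmpeg's libvmaf filter to cut
# down VMAF computation time. File names are placeholders; this needs an
# ffmpeg build that includes libvmaf.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-i", "distorted.mp4", "-i", "reference.mkv",
        "-lavfi", "libvmaf=n_subsample=5:log_fmt=json:log_path=vmaf.json",
        "-f", "null", "-",
    ],
    check=True,
)
# vmaf.json then contains the per-scored-frame values and the pooled score.
```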
Thank you very much for responding, and thank you for creating this wonderful library.
The perspective to look at it is that all of your effort would go un-appreciated if people reach the landing page, try to use the software, get stuck with it and just move on without using it, if it's difficult to figure out how to use it.
Thanks for explaining; I can appreciate that perspective. If you have any constructive suggestions for improvement though, the best way to accomplish that would be through a Pull Request.
When running the updated example on PyPi, I get this error /home/navin/.pyenv/versions/3.10.1/lib/python3.10/site-packages/numpy/core/_methods.py:233: RuntimeWarning: invalid value encountered in subtract x = asanyarray(arr - arrmean) on the line print(ffqm.get_global_stats()["ssim"]["ssim_y"]["average"]). The video I'm running is from this page, (the ref file sample_960x400_ocean_with_audio.mkv), and the distorted file is the same file converted to mp4 using ffmpeg.
Did you get a warning only, or was that an error? In any case, please post a new issue for this and fill out the issue template. Ideally, supply the samples you used, including the ffmpeg command to produce the distorted file. It will be hard to help otherwise.
Also, in the line ffqm = FfmpegQualityMetrics("path/to/ref", "path/to/dist"), it'd help to mention it as ("path/to/referenceVideo", "path/to/distortedVideo").
Will fix this.
If you'd like to include a link to this answer (I wrote it) about how to build ffmpeg with vmaf, it'd help users who don't know how to do it.
I don't really think it's necessary. All the static builds I linked to in the README are compiled with libvmaf. If users want to build ffmpeg manually with libvmaf support, the official wiki has instructions for that, and there are other scripts that work for Windows.
In my experience, linking to official resources and docs is preferable because the build steps might be changed, and you'd have to go back and fix your Stack Overflow answer.