About an error
jndxcaiwei opened this issue · 32 comments
Hello, I am a bit confused: I have installed pandas as required, but when I run the example: poetry run p1204_3 test_videos/test_video_h264.mkv
it throws an error: ModuleNotFoundError: No module named 'pandas'. I hope you can help.
Please show the exact steps that you have run.
In particular, you need to run poetry install
once before.
I did not run “poetry install”; instead I used pip3 install for all the Python 3 libraries required in the file pyproject.toml.
(In the country where I am, pip sometimes cannot be used quickly because the pip server is abroad. If I use poetry install, it throws a connection timeout error, so I use pip3 install with our country's pip mirror source.)
Please use poetry install
to install the dependencies. There is an open issue wrt timeouts here: python-poetry/poetry#2200
If you must use pip directly, and all dependencies are available, instead of:
poetry run p1204_3 test_videos/test_video_h264.mkv
do
python3 -m p1204_3 test_videos/test_video_h264.mkv
I tried, but still failed to run poetry install directly, so I also tried python3 -m p1204_3 test_videos/test_video_h264.mkv, which throws an error too: /usr/bin/python3: No module named p1204_3.__main__; 'p1204_3' is a package and cannot be directly executed
Could you help me figure out what the matter is with this?
When running pip3 install ···, it often throws "ModuleNotFoundError: No module named 'keyring.util.escape'"
It looks like there is an issue with your Python environment that is not directly related to this project. I am sorry, but I don't think we can be of further help here. About the keyring issue, does this help? https://stackoverflow.com/questions/53164278/missing-dependencies-causing-keyring-error-when-opening-spyder3-on-ubuntu18
That works, thank you very much. But when I run “poetry run p1204_3 test_videos/test_video_h264.mkv”, it runs for a while and, when finished, it throws
"ERROR:root:there was something wrong with test_videos/test_video_h264.mkv
ERROR:root:please check the following command:
/home/caiwei/bitstream_mode3_p1204_3/p1204_3/bitstream_mode3_videoparser/parser.sh "test_videos/test_video_h264.mkv" --output "./tmp/test_video_h264.json.bz2"
ERROR:root:no bitstream stats file for test_videos/test_video_h264.mkv
[
{}
]
INFO:root:store all results to reports"
and the folder "reports" is empty
What happens when you run:
/home/caiwei/bitstream_mode3_p1204_3/p1204_3/bitstream_mode3_videoparser/parser.sh "test_videos/test_video_h264.mkv" --output "./tmp/test_video_h264.json.bz2"
When I run only that command, it seems to keep analyzing frames, and when it is done it shows this:
/home/caiwei/bitstream_mode3_p1204_3/p1204_3/bitstream_mode3_videoparser/parser.sh: line 7: 37502 Killed python3 "$scriptPath/PythonInterface/parse.py" "$@" --dll "$scriptPath/VideoParser/libvideoparser.so"
Could it be that your system is running out of memory while it is analyzing? The fact that the process is simply killed hints at some underlying system issue with the Python parser process.
what is the minimum memory required for the project?
For the sample file, it takes about 3.5 GB of virtual memory. The reason for that is the design of the bitstream parser, which is a proof of concept at the moment and is provided for reference purposes only.
The reason it takes a lot of memory is that the statistics are assembled in-memory and later written to the output file (./tmp/test_video_h264.json.bz2) rather than being written out on the fly. The pseudo code is:
def parse(self):
    """
    Parse the input file
    """
    self.parser.init()
    while True:
        stats = self.parser.parse_next_frame()
        if stats is None:  # no more frames to parse
            break
        # all per-frame stats are collected in memory until parsing finishes
        self.collected_stats.append(stats)
I think we may be able to restructure this so that the stats are written frame-by-frame in an NDJSON file rather than one big JSON array. This will keep the memory footprint very small.
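A rough sketch of that idea (hypothetical; it assumes parse_next_frame() returns None once the stream is exhausted and that each stats object is JSON-serializable) could look like this:

import json

def parse_to_ndjson(self, output_path):
    """
    Parse the input file and write the per-frame stats as NDJSON,
    one JSON object per line, so nothing accumulates in memory.
    """
    self.parser.init()
    with open(output_path, "w") as out:
        while True:
            stats = self.parser.parse_next_frame()
            if stats is None:  # end of stream
                break
            out.write(json.dumps(stats) + "\n")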
What do you think, @stg7?
Hello, I reinstalled Ubuntu with more memory and it delivered the result successfully. Your advice was a great help, thank you very much!
May I ask a low-level question? Do you have a description of the results? I mean, which field represents the score of the video?
And does a higher score mean better video quality?
I pushed some information on the results in the readme, see here: https://github.com/Telecommunication-Telemedia-Assessment/bitstream_mode3_p1204_3#usage
thank you very much!
I have several more questions about this project; if you have spare time, I hope for your answer.
1. The most important and maybe the most low-level question: If I use my own test video, do I only need to use the project "bitstream_mode3_videoparser" to generate a file named like “test_video_h264.json”? Where do I then put it to use it? And do I only need to modify the filename in “bitstream_mode3_p1204_3/tests/test_p1204_3.py”?
2. What is the file “test_video_h264_feat.pkl” used for? Does it have any influence on my own test video?
3. Does this project only apply to devices like PC and mobile? I ask because “bitstream_mode3_p1204_3/p1204_3/models/p1204_3” only contains PC and mobile.
I would appreciate your reply. Thank you very very much!!
1. The most important and maybe the most low-level question: If I use my own test video, do I only need to use the project "bitstream_mode3_videoparser" to generate a file named like “test_video_h264.json”? Where do I then put it to use it? And do I only need to modify the filename in “bitstream_mode3_p1204_3/tests/test_p1204_3.py”?
The p1204 code will take care of running the bitstream parser and generating the respective feature file (json.bz2). It will be put into tmp and read by p1204 to generate the model scores.
You do not need to modify the source code. You can simply call the p1204 binary with a different filename:
poetry run p1204_3 test_videos/some-other-video.mkv
2. What is the file “test_video_h264_feat.pkl” used for? Does it have any influence on my own test video?
No, it belongs to the test video with the name test_video_h264.
3. Does this project only apply to devices like PC and mobile?
Yes.
According to the document, this model seems to exclude the influence of the network layer and to consider only the media layer, and it says the model only applies to reliable transport systems (TCP/IP). Is that to ensure that the video frames are complete?
If I use it in a UDP/TCP system, like a video conference system with a good network condition (no packet loss, complete frames), is that OK or not?
And what about the video source requirements? Is a secondary stream, I mean screen sharing in a video conference, OK?
If I use it in a UDP/TCP system, like a video conference system with a good network condition (no packet loss, complete frames), is that OK or not?
Assuming that all frames have been transmitted without errors, the model can be used, yes.
Note that the model really only covers the media layer, so the effects of the network (stalling, freezing, delay) are not considered.
And what about the video source requirements? Is a secondary stream, I mean screen sharing in a video conference, OK?
The model has not been validated with desktop-type screensharing video, so we cannot tell you whether the predictions will be accurate for that case. If you want to get an estimation of the quality, for sure it can be used.
Can I consider this model to be already well trained, so that as long as my video meets the requirements I can use it with no problem?
So far, can video captured by any camera model be used with this model, in addition to virtual cameras (which have not been verified yet, right)?
You mention that the network is not considered; does that mean we cannot use this model if there are problems such as blurring and freezing in the test video? (A low-level question, I just want to confirm.)
Can I consider this model to be already well trained, so that as long as my video meets the requirements I can use it with no problem?
Yes, as long as the video fits the scope (resolution, frame rate, etc.).
So far, can video captured by any camera model be used with this model, in addition to virtual cameras (which have not been verified yet, right)?
Yes, that should make no difference as long as the bitstream is H.264-conformant.
You mention that the network is not considered; does that mean we cannot use this model if there are problems such as blurring and freezing in the test video? (A low-level question, I just want to confirm.)
Freezing can not be handled by the model. If frames are frozen in the player, this cannot be captured. Repeated frames shown as duplicated frames in the bitstream will get basically the same score and not be considered an error.
Blurring due to reduction of resolution will be handled in the model.
I noticed that the table “Video test factors for which the model has been validated” in the article says that the aspect ratio is 16:9. Does that mean the input video's aspect ratio must be 16:9, or only that 16:9 is the aspect ratio you validated?
What if it is a video with the required resolution, but it has been scaled? Is that OK?
And regarding the table about “display resolution and frame rate”: PC/TV 2160p, up to 60 frames/s, and MO/TA 1440p, up to 60 frames/s. You also list a “Video resolution and bitrate” requirement. Does that mean the same as what I said above, i.e. you only validated “PC/TV 2160p, up to 60 frames/s and MO/TA 1440p, up to 60 frames/s”, and as long as the video meets the requirement it is OK?
Does that mean the input video's aspect ratio must be 16:9, or only that 16:9 is the aspect ratio you validated?
We only validated 16:9 but it can be assumed that the model is valid for other reasonable aspect ratios.
Does that mean the same as what I said above, i.e. you only validated “PC/TV 2160p, up to 60 frames/s and MO/TA 1440p, up to 60 frames/s”, and as long as the video meets the requirement it is OK?
Yes, if it says "up to", anything equal to or lower than that will be covered by the model.
Hello, I am back.
Now I want to try using this model to test the video quality of a video conference system. According to the guidance, I should use the decoded video from the receiver and put it into the model for testing. This step should be correct, right?
The receiver is a PC, so it can only get the received video through screen recording (the PC resolution is 3840x2160) and then send the video to the model for detection. Will this have an impact on the video quality, or is there a better way to get the video?
thanks for your guidance!
You cannot use a screen recording as model input. That would create invalid predictions of quality.
The model requires the original bitstream, as transmitted via the network. The bitstream can be acquired for example by using a traffic capture program.
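For example, a possible first step (interface name and filename are hypothetical) would be to record the incoming traffic with tcpdump:
tcpdump -i eth0 -w incoming_video.pcap
How to extract the raw H.264 elementary stream from such a capture then depends on the transport protocol (e.g. RTP) your system uses.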
Like packet capture software on the decoding side, e.g. Wireshark?
But I have seen that the examples given are all videos. Can I take it that you capture the H.264 bitstream transmitted via the network, package it, and send it to the model?
And the influence of the decoding side is not considered?
thank you very much!
Yes, you will probably have to capture the raw H.264 bitstream and multiplex it into a container that is readable by ffmpeg, so that the bitstream parser can read the frame statistics.
The influence of decoding is not considered, as the model uses the transmitted bitstream only. This is different from, say, No-Reference or Full-Reference models that use the decoded pixels.
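As a sketch (filenames and frame rate are hypothetical), a captured raw H.264 elementary stream could be wrapped into an MKV container without re-encoding like this:
ffmpeg -framerate 25 -i captured_stream.h264 -c copy captured_stream.mkv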
I think maybe I can use a video capture card to capture the decoded video, and the video will automatically be multiplexed into a container? This way may be affected by decoding, but as long as all frames are complete, will that be OK?
thank you very much
You cannot capture the decoded video and use that as input for the model, since then the quality score will reflect the quality of the capture card/the recorded video, and not the video that was transmitted.
It depends on your architecture, but you will need to capture the H.264 bitstream as it comes in to the client.
Yes, I understand what you mean.
I would like to ask one more question: is multiplexing the bitstream into a container necessary?
Without checking in detail, yes, I think it would be better to multiplex it into a container so that the framerate of the video can be properly estimated.
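To check that the resulting container reports a sensible frame rate (filename hypothetical), something like this ffprobe call can be used:
ffprobe -v error -select_streams v:0 -show_entries stream=avg_frame_rate -of default=noprint_wrappers=1 captured_stream.mkv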