Improve Processing time per frame in video while testing
xanthan011 opened this issue · 1 comment
Firstly, amazing work by the contributors.
I have installed the fer library on google colab (through pip).
I wanted to know if there is a way to improve the processing time per frame; my aim is to reduce the total processing time when testing, say, 4 videos at once.
I have already tried multithreading and multiprocessing, but neither method seems to reduce the processing time. I understand that your model looks at each and every frame of the video it is given, but is there a way to run it in parallel on more than one video so as to reduce the overall execution time?
A sample of the threading code I tried is given below:
import threading
from fer import FER, Video

def funk(video_name):
    try:
        # Face detection
        detector = FER(mtcnn=True)
        # Video predictions
        video = Video(video_name)
        # Output list of dictionaries
        raw_data = video.analyze(detector, display=False)
    except Exception as e:
        print(f"In video {video_name} there was an error:\n{e}")

videos = ["a", "b", "c", "d"]
threads = []
for each in videos:
    t = threading.Thread(target=funk, args=[each])
    t.start()
    threads.append(t)
for x in threads:
    x.join()
If anything in this code can be improved to reduce the execution time, please let me know. Other approaches are also welcome.