hongshuochen/DefakeHop

About Model Size / Saving Model


I wanted to ask about the model size you got after training on the Celeb-DF and FF++ datasets. I want to save the model and then use it for single predictions, and as I understand it, prediction requires saving the "classifier" and "defakeHop" objects. However, the defakeHop object's size depends on the training data: I end up with a 10 GB defakeHop and a 760 KB classifier. Maybe I did something wrong? How would you save the model for future prediction? If you have some time, could you explain it to me?

Hi! This is a good question! I think for some reason I save the features in the model. It should be set to an empty dictionary after training; maybe I forgot to clear it. That is why the model size depends on the data size. Let me double-check and get back to you! Thank you!

self.features = {}
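
For anyone hitting this in the meantime, here is a minimal save/load sketch using pickle. It assumes both objects are picklable and that the defakeHop object caches its training features in a `features` dict, as the comment above suggests; the function names are illustrative, not part of the repo's API.

```python
# Minimal sketch: clear the cached training features, then pickle
# both trained objects together for later single-video prediction.
import pickle

def save_model(defakeHop, classifier, path="defakehop_model.pkl"):
    defakeHop.features = {}  # drop cached training features before saving
    with open(path, "wb") as f:
        pickle.dump({"defakeHop": defakeHop, "classifier": classifier}, f)

def load_model(path="defakehop_model.pkl"):
    with open(path, "rb") as f:
        saved = pickle.load(f)
    return saved["defakeHop"], saved["classifier"]
```

Clearing `features` before pickling is what keeps the file small; the attribute name follows the maintainer's snippet above.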

I found a solution to this problem.

In the "multi_cwSaab.py" file:

Line 17: self.tmp = []
Line 107: self.tmp.append((saab_id, channel_id, self.max_pooling(output)))

These lines should be removed: they only accumulate intermediate outputs that are never used again, and this buffer is what bloats the saved defakeHop object.
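
If editing the source isn't convenient, the same effect can be had by stripping the buffers from an already-trained model before pickling it. This is a hedged sketch: the attribute names (`tmp`, `features`) come from the snippets quoted in this thread, while the traversal over the defakeHop object's attributes is an assumption about how the cwSaab units are reachable in your version of the code.

```python
# Post-training cleanup sketch: clear the accumulated buffers on an
# already-trained model instead of editing multi_cwSaab.py.
def strip_buffers(defakeHop):
    defakeHop.features = {}  # cached training features (see comment above)
    # clear self.tmp on every sub-object that has one; adjust this
    # traversal to match how your defakeHop instance stores its units
    for attr in vars(defakeHop).values():
        if hasattr(attr, "tmp"):
            attr.tmp = []
    return defakeHop
```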

Hello! Are there any updates on saving the model and using it to predict individual video files?