This is the implementation of the paper "CSVideoNet: A Recurrent Convolutional Neural Network for Compressive Sensing Video Reconstruction" (https://arxiv.org/abs/1612.05203).
- 0.25_196_ExtractFrame.sh is used to extract frames from videos.
- generateTrainCNN.m is used to generate image blocks for training CNN1 (the background CNN).
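The block-generation step can be sketched as follows. This is a minimal Python/NumPy illustration that splits a frame into non-overlapping square blocks; the block size and the example frame are assumptions for illustration, not values taken from generateTrainCNN.m:

```python
import numpy as np

def extract_blocks(frame, block_size=32):
    """Split a 2-D grayscale frame into non-overlapping square blocks.

    Edge pixels that do not fill a complete block are discarded.
    """
    h, w = frame.shape
    h_trim = h - h % block_size
    w_trim = w - w % block_size
    frame = frame[:h_trim, :w_trim]
    # Reshape into a grid of blocks, then flatten the grid dimensions.
    blocks = (frame
              .reshape(h_trim // block_size, block_size,
                       w_trim // block_size, block_size)
              .swapaxes(1, 2)
              .reshape(-1, block_size, block_size))
    return blocks

# Example: a 100x100 frame yields a 3x3 grid of 32x32 blocks.
frame = np.arange(100 * 100, dtype=np.float32).reshape(100, 100)
blocks = extract_blocks(frame)
print(blocks.shape)  # (9, 32, 32)
```

Each block is later fed to the CNN as an independent training sample.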
- "caffe/model/crX" directory contains all the necessary files for training CNN1.
- extractFeatures_5_25.m extracts the intermediate features produced by CNN1.
- "model" directory contains all the files for training the whole framework. The pre-trained CNN1 is loaded and then trained jointly with the rest of the framework, using the intermediate features as input. The original image blocks are the input for training CNN2, which is trained from scratch.
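For context, the compressive measurement that produces the network inputs can be sketched as below. This is a minimal NumPy illustration of block-wise sensing at compression ratio 0.25 with a random Gaussian matrix; the matrix type, block size, and scaling are assumptions for illustration, not necessarily the exact setup of this repository:

```python
import numpy as np

rng = np.random.default_rng(0)

block_size = 32
n = block_size * block_size      # dimension of a vectorized block
cr = 0.25                        # compression ratio (cf. the crX directories)
m = int(n * cr)                  # number of measurements per block

# Random Gaussian sensing matrix (illustrative choice; the actual
# measurement operator used in the repository may differ).
phi = rng.standard_normal((m, n)) / np.sqrt(m)

block = rng.standard_normal((block_size, block_size))
x = block.reshape(-1)            # vectorize the image block
y = phi @ x                      # compressed measurement fed to the network

print(x.shape, y.shape)  # (1024,) (256,)
```

At CR 0.25, each 1024-dimensional block is reduced to 256 measurements before reconstruction.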
- Run VideoNet.lua to train the whole framework.