video_features
allows you to extract features from raw videos in parallel across multiple GPUs.
It supports several extractors that capture visual appearance, optical flow, and audio features.
See the Documentation for more details.
```bash
# clone the repo and change the working directory
git clone https://github.com/v-iashin/video_features.git
cd video_features

# install the conda environment
conda env create -f conda_env_torch_zoo.yml

# activate the environment
conda activate torch_zoo

# extract r(2+1)d features for the sample videos
python main.py \
    feature_type=r21d \
    device_ids="[0]" \
    video_paths="[./sample/v_ZNVhz7ctTq0.mp4, ./sample/v_GGSY1Qvo990.mp4]"

# use `device_ids="[0, 2]"` to run on the 0th and 2nd devices in parallel
```
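Once extraction finishes, the features can be saved to disk and loaded back with NumPy. The sketch below is a minimal, self-contained illustration of that workflow; the file name and the `(num_segments, feature_dim)` shape (512-d per clip for r(2+1)d) are assumptions for the example, not guarantees about the extractor's output layout.

```python
import os
import tempfile

import numpy as np

# Simulate a saved feature file so the example is self-contained.
# The path and array shape below are assumptions for illustration only.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "v_ZNVhz7ctTq0_r21d.npy")

    # one row per video segment, 512-d features (assumed r(2+1)d dim)
    np.save(path, np.random.rand(4, 512).astype(np.float32))

    feats = np.load(path)
    print(feats.shape)  # (4, 512)
```

In practice you would point `np.load` at the files the extractor writes instead of creating a dummy array.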
- Action Recognition
- Sound Recognition
- Optical Flow
- Image Recognition
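For orientation, here is a hedged sketch of how the categories above might map to `feature_type` values, together with a tiny helper that builds the CLI overrides shown in the quickstart. The specific model names are assumptions drawn from the models the repo ships; check the Documentation for the authoritative list.

```python
# Assumed mapping from the categories above to feature_type names.
FEATURE_TYPES = {
    "action_recognition": ["i3d", "r21d", "s3d"],
    "sound_recognition": ["vggish"],
    "optical_flow": ["raft", "pwc"],
    "image_recognition": ["resnet"],
}


def flags_for(feature_type: str, device_ids=(0,)) -> str:
    """Build the main.py CLI overrides (format taken from the quickstart)."""
    ids = ", ".join(str(i) for i in device_ids)
    return f'feature_type={feature_type} device_ids="[{ids}]"'


print(flags_for("vggish", (0, 2)))
# feature_type=vggish device_ids="[0, 2]"
```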
Please let me know if you find this repo useful for your projects or papers.
I would be happy to consider your ideas and PRs. Here are a few things I have in mind:
- Docker image supporting all models
- PyTorch DDP support (for multi-node extraction)
- Refactor the code base with OOP in mind – currently, the code is somewhat redundant across feature extractors
- More models, of course