This project brings ViTMatte ("Boosting Image Matting with Pre-trained Plain Vision Transformers") to The Foundry's Nuke.
ViTMatte is a natural matting neural network that can pull high-quality alphas from garbage mattes (trimaps).
This implementation wraps ViTMatte into a single Inference node in Nuke, removing complicated external dependencies and allowing it to be easily installed on any Nuke 14+ system running Linux or Windows.
While ViTMatte works best on still images and doesn't have temporal stability, it can still be helpful for pulling difficult mattes, especially those with fine details like hair and fur.
Demo video: VITMatte_demo_001.mp4
Nuke 14.0+, tested on Linux and Windows.
- High-quality natural matting results.
- Moderate memory requirements, allowing 2K and 4K frame sizes on modern GPUs (12 GB or more).
- Fast: less than one second per frame at 2K.
- Commercial-use license.
- Download and unzip the latest release from here.
- Copy the extracted `Cattery` folder to `.nuke` or your plugins path.
- In the toolbar, choose Cattery > Update, or simply restart Nuke.

ViTMatte will then be accessible in the toolbar under Cattery > Matting > ViTMatte.
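On Linux, the copy step above can also be done from a terminal. The sketch below is self-contained: it creates a stand-in `Cattery` folder inside a temp directory so the commands run anywhere. In a real install, `Cattery` comes from the unzipped release and the destination is your actual `~/.nuke` (or plugins path).

```shell
# Self-contained sketch; the temp dir and stand-in Cattery folder are
# placeholders for the extracted release and your real ~/.nuke.
WORK="$(mktemp -d)"
cd "$WORK"
mkdir -p Cattery/Matting        # stand-in for the extracted release folder
NUKE_DIR="$WORK/.nuke"          # stand-in for ~/.nuke
mkdir -p "$NUKE_DIR"
cp -r Cattery "$NUKE_DIR/"      # the actual install step
```

After the copy, choosing Cattery > Update (or restarting Nuke) picks up the new plugin.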
Due to file size limitations, the Cattery (`ViTMatte.cat`) and TorchScript (`VitMatte.pt`) models need to be downloaded from an external server:
https://drive.google.com/file/d/1bXqdh4dD8bVpSEuNFk50WOEp12I2I4S4/view?usp=sharing
ViTMatte.cat is licensed under the MIT License, and is derived from https://github.com/hustvl/ViTMatte.
While the MIT License permits commercial use of ViTMatte, the dataset used for its training may be under a non-commercial license.
This license does not cover the underlying pre-trained model, associated training data, and dependencies, which may be subject to further usage restrictions.
Consult https://github.com/hustvl/ViTMatte for more information on associated licensing terms.
Users are solely responsible for ensuring that the underlying model, training data, and dependencies align with their intended usage of ViTMatte.cat.
@article{yao2024vitmatte,
title={ViTMatte: Boosting image matting with pre-trained plain vision transformers},
author={Yao, Jingfeng and Wang, Xinggang and Yang, Shusheng and Wang, Baoyuan},
journal={Information Fusion},
volume={103},
pages={102091},
year={2024},
publisher={Elsevier}
}