
Primary language: Python · License: Apache-2.0

Dual-Context Aggregation for Universal Image Matting (DCAM)

Official repository for the paper Dual-Context Aggregation for Universal Image Matting

Description

DCAM is a universal image matting network.

Requirements

Hardware:

At least 12 GB of GPU memory for inference on the Adobe Composition-1K testing set.

Packages:

  • torch >= 1.10
  • numpy >= 1.16
  • opencv-python >= 4.0
  • einops >= 0.3.2
  • timm >= 0.4.12
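
The minimum versions above can be verified before running inference; the following sketch is a small helper we find convenient (not part of this repository) that compares installed package versions against the list:

```python
# Check installed package versions against the minimums listed above.
# This helper is illustrative and is not part of the DCAM codebase.
from importlib import metadata

REQUIREMENTS = {
    "torch": "1.10",
    "numpy": "1.16",
    "opencv-python": "4.0",
    "einops": "0.3.2",
    "timm": "0.4.12",
}

def version_tuple(v):
    # "1.10.2+cu113" -> (1, 10, 2); ignores local/build suffixes.
    return tuple(int(p) for p in v.split("+")[0].split(".") if p.isdigit())

def check(requirements=REQUIREMENTS):
    """Return a list of (package, required, installed-or-None) for every
    package that is missing or older than the required version."""
    problems = []
    for pkg, minimum in requirements.items():
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            problems.append((pkg, minimum, None))
            continue
        if version_tuple(installed) < version_tuple(minimum):
            problems.append((pkg, minimum, installed))
    return problems
```

An empty return value from `check()` means the environment satisfies the listed requirements.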

Models

The models may be used and distributed for noncommercial purposes only.

Quantitative results on Adobe Composition-1K.

Model  Size     MSE   SAD    Grad  Conn
DCAM   181 MiB  3.34  22.62  7.67  18.02

Quantitative results on Distinctions-646. Note that on Distinctions-646 the network relies on the texture difference between foreground and background as a prior for prediction, which may fail on real images.

Model  Size     MSE   SAD    Grad   Conn
DCAM   182 MiB  4.86  31.27  25.50  31.72

Evaluation

We provide the script eval_dcam_adb_tri.py for evaluation.
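
The tables above report SAD, MSE, Grad, and Conn. As a rough illustration of the first two, the sketch below follows the standard Composition-1K convention of accumulating errors only over the unknown trimap region; the function name and reporting scales are illustrative assumptions, not the actual contents of eval_dcam_adb_tri.py:

```python
# Hedged sketch of the conventional SAD/MSE matting metrics; this is an
# assumption about the evaluation protocol, not the repository's script.
import numpy as np

def matting_errors(pred, gt, trimap):
    """pred, gt: float alpha mattes in [0, 1]; trimap: uint8 (0/128/255).

    Errors are accumulated only over the unknown (128) trimap region,
    as in the standard Composition-1K evaluation protocol."""
    unknown = trimap == 128
    diff = (pred - gt)[unknown]
    sad = np.abs(diff).sum() / 1000.0   # SAD conventionally reported in thousands
    mse = (diff ** 2).mean() * 1000.0   # MSE conventionally reported scaled by 1e3
    return sad, mse
```

Grad and Conn (gradient and connectivity errors) follow the same unknown-region convention but require additional filtering and thresholding steps.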

Citation

If you use this model in your research, please cite the paper:

@article{liu2023dual,
  title={Dual-context aggregation for universal image matting},
  author={Liu, Qinglin and Lv, Xiaoqian and Yu, Wei and Guo, Changyong and Zhang, Shengping},
  journal={Multimedia Tools and Applications},
  pages={1--19},
  year={2023},
  publisher={Springer}
}