
Official Implementation of Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction


Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction

Project Page · Paper · HuggingFace Demo (Depth) · HuggingFace Demo (Normal)

Jing He1, Haodong Li1, Wei Yin2, Yixun Liang1, Leheng Li1, Kaiqiang Zhou3, Hongbo Zhang3, Bingbing Liu3, Ying-Cong Chen1,4✉

1HKUST(GZ) 2University of Adelaide 3Noah's Ark Lab 4HKUST
* Both authors contributed equally. ✉ Corresponding author.


We present Lotus, a diffusion-based visual foundation model for dense geometry prediction. With minimal training data, Lotus achieves SoTA performance in two key geometry perception tasks, i.e., zero-shot depth and normal estimation. "Avg. Rank" indicates the average ranking across all metrics, where lower values are better. Bar length represents the amount of training data used.

📢 News

  • 2024-10-06: The demos are now available (Depth & Normal). Video depth & normal predictions are also supported. Give them a try!
  • 2024-10-05: The inference code is now available. Paper is updated to v2.
  • 2024-09-26: Paper released.

🛠️ Setup

This installation was tested on: Ubuntu 20.04 LTS, Python 3.9, CUDA 12.3, NVIDIA A800-SXM4-80GB.

  1. Clone the repository (requires git):
git clone https://github.com/EnVision-Research/Lotus.git
cd Lotus
  2. Install dependencies (requires conda):
conda create -n lotus python=3.9 -y
conda activate lotus
pip install -r requirements.txt

🕹️ Usage

Testing on your images

  1. Place your images in a directory, for example, under assets/in-the-wild_example (where we have prepared several examples).
  2. Run the inference command: bash infer.sh.
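The inference script reads every image from a flat input directory. As an illustrative sketch (the helper name and the extension list below are assumptions, not part of the Lotus codebase), collecting such a directory in Python might look like:

```python
from pathlib import Path

# Which extensions count as images is an assumption here;
# the repository's inference script may accept other formats.
IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

def collect_images(input_dir: str) -> list[Path]:
    """Return the image files in `input_dir`, sorted for a deterministic order."""
    root = Path(input_dir)
    return sorted(
        p for p in root.iterdir()
        if p.is_file() and p.suffix.lower() in IMAGE_EXTS
    )
```

Sorting the paths keeps output filenames aligned with their inputs across repeated runs.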

Evaluation on benchmark datasets

  1. Prepare benchmark datasets:

    cd datasets/eval/depth/
    
    wget -r -np -nH --cut-dirs=4 -R "index.html*" -P . https://share.phys.ethz.ch/~pf/bingkedata/marigold/evaluation_dataset/
    
  2. Run the evaluation command: bash eval.sh
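For reference, zero-shot depth benchmarks of this kind are typically scored with metrics such as AbsRel and the δ < 1.25 accuracy, after a least-squares scale-and-shift alignment of the affine-invariant prediction to ground truth. The sketch below is an illustrative re-implementation of that standard protocol, not the repository's evaluation code:

```python
import numpy as np

def align_scale_shift(pred: np.ndarray, gt: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fit scale s and shift t minimizing ||s*pred + t - gt||^2 over valid pixels."""
    p, g = pred[mask], gt[mask]
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return s * pred + t

def depth_metrics(pred: np.ndarray, gt: np.ndarray, mask: np.ndarray):
    """Return (AbsRel, delta1) for an affine-invariant depth prediction."""
    aligned = align_scale_shift(pred, gt, mask)
    p, g = aligned[mask], gt[mask]
    abs_rel = float(np.mean(np.abs(p - g) / g))        # mean absolute relative error
    ratio = np.maximum(p / g, g / p)
    delta1 = float(np.mean(ratio < 1.25))              # fraction within 1.25x of GT
    return abs_rel, delta1
```

Because alignment absorbs any global scale and shift, a prediction that is an exact affine transform of the ground truth scores AbsRel ≈ 0 and δ1 = 1.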

Choose your model

We offer four models in total; the corresponding configurations are:

| CHECKPOINT_DIR               | TASK_NAME | MODE       |
|------------------------------|-----------|------------|
| jingheya/lotus-depth-g-v1-0  | depth     | generation |
| jingheya/lotus-depth-d-v1-0  | depth     | regression |
| jingheya/lotus-normal-g-v1-0 | normal    | generation |
| jingheya/lotus-normal-d-v1-0 | normal    | regression |
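If you script over these configurations, the table can be captured as a small lookup. This is a convenience sketch (the dictionary and helper below are not part of the repository):

```python
# Mapping from (TASK_NAME, MODE) to CHECKPOINT_DIR, mirroring the table above.
CHECKPOINTS = {
    ("depth", "generation"):  "jingheya/lotus-depth-g-v1-0",
    ("depth", "regression"):  "jingheya/lotus-depth-d-v1-0",
    ("normal", "generation"): "jingheya/lotus-normal-g-v1-0",
    ("normal", "regression"): "jingheya/lotus-normal-d-v1-0",
}

def get_checkpoint(task_name: str, mode: str) -> str:
    """Look up the Hugging Face checkpoint ID for a task/mode pair."""
    try:
        return CHECKPOINTS[(task_name, mode)]
    except KeyError:
        raise ValueError(f"Unknown configuration: {task_name!r}/{mode!r}")
```

The generative ("g") variants sample through the diffusion process, while the discriminative ("d") variants regress the output directly; choose per the paper's trade-offs.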

🎓 Citation

If you find our work useful in your research, please consider citing our paper:

@article{he2024lotus,
    title={Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction},
    author={He, Jing and Li, Haodong and Yin, Wei and Liang, Yixun and Li, Leheng and Zhou, Kaiqiang and Zhang, Hongbo and Liu, Bingbing and Chen, Ying-Cong},
    journal={arXiv preprint arXiv:2409.18124},
    year={2024}
}