Segment Anything 🤖 in 3D with NeRFs (SA3D)

Segment Anything in 3D with NeRFs
Jiazhong Cen*1, Zanwei Zhou*1, Jiemin Fang2, Chen Yang1, Wei Shen1✉, Lingxi Xie3, Dongsheng Jiang3, Xiaopeng Zhang3, Qi Tian3
1AI Institute, SJTU   2School of EIC, HUST   3Huawei Inc.
*denotes equal contribution

Given a NeRF, just input prompts from a single view and then get your 3D model.

We propose a novel framework to Segment Anything in 3D, named SA3D. Given a neural radiance field (NeRF) model, SA3D allows users to obtain the 3D segmentation result of any target object via one-shot manual prompting in a single rendered view. The entire process for obtaining the target 3D model takes approximately two minutes, even without any engineering optimization. Our experiments demonstrate the effectiveness of SA3D across different scenes, highlighting the potential of SAM in 3D scene perception.

Update

  • 2023/06/29: We now support MobileSAM as the segmentation network. Follow the installation instructions in MobileSAM, then download mobile_sam.pt into the folder ./dependencies/sam_ckpt. Use the --mobile_sam flag to switch to MobileSAM; see the example below.
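
    For example, combining this flag with the GUI command from the Usage section below:

    python run_seg_gui.py --config=configs/llff/seg/seg_fern.py --segment \
    --sp_name=_gui --num_prompts=20 \
    --render_opt=train --save_ckpt --mobile_sam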

Overall Pipeline

SA3D_pipeline

With input prompts, SAM cuts out the target object from the corresponding view. The obtained 2D segmentation mask is projected onto 3D mask grids via density-guided inverse rendering. 2D masks are then rendered from other views; these are mostly incomplete, but serve as cross-view self-prompts that are fed into SAM again. The completed masks are projected onto the mask grids in turn. Executing this procedure iteratively yields an accurate 3D mask. SA3D adapts to various radiance fields effectively without any additional redesign.
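
A minimal Python sketch of the density-guided inverse rendering step may help make this concrete. This is an illustrative simplification, not SA3D's actual implementation: density_fn stands in for the NeRF's density query, and the mask grid is updated by direct voting here, whereas SA3D optimizes its mask grids with gradient-based losses. In the full pipeline this projection alternates with rendering the grid into new views and re-prompting SAM (cross-view self-prompting).

import numpy as np

def project_mask_to_grid(mask_2d, rays_o, rays_d, density_fn, grid,
                         grid_min, voxel, n_samples=64, t_max=4.0):
    """Toy density-guided inverse rendering: distribute each pixel's mask
    value along its camera ray, weighted by the NeRF rendering weights,
    and accumulate the votes into a 3D mask grid."""
    ts = np.linspace(0.0, t_max, n_samples)          # sample depths along each ray
    dt = ts[1] - ts[0]
    H, W = mask_2d.shape
    for i in range(H):
        for j in range(W):
            pts = rays_o[i, j] + ts[:, None] * rays_d[i, j]       # (S, 3) samples
            sigma = density_fn(pts)                               # NeRF density, (S,)
            alpha = 1.0 - np.exp(-sigma * dt)
            trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
            weights = trans * alpha                               # rendering weights
            idx = np.floor((pts - grid_min) / voxel).astype(int)  # voxel indices
            ok = np.all((idx >= 0) & (idx < np.array(grid.shape)), axis=1)
            # foreground pixels push voxel logits up, background pushes them down
            vote = weights[ok] * (1.0 if mask_2d[i, j] else -1.0)
            np.add.at(grid, tuple(idx[ok].T), vote)
    return grid

# Toy usage: parallel rays along -z through a spherical density blob.
H = W = 16
xs, ys = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))
rays_o = np.stack([xs, ys, np.full_like(xs, 2.0)], axis=-1)
rays_d = np.broadcast_to([0.0, 0.0, -1.0], rays_o.shape)
mask_2d = (xs**2 + ys**2) < 0.25                     # pretend this is a SAM mask
density_fn = lambda p: 10.0 * (np.linalg.norm(p, axis=-1) < 0.5)
grid = project_mask_to_grid(mask_2d, rays_o, rays_d, density_fn,
                            np.zeros((32, 32, 32)),
                            grid_min=np.array([-1.0, -1.0, -1.0]), voxel=2.0 / 32)
print("voxels with positive mask logit:", int((grid > 0).sum()))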

Installation

git clone https://github.com/Jumpat/SegmentAnythingin3D.git
cd SegmentAnythingin3D

conda create -n sa3d python=3.10
conda activate sa3d
pip install -r requirements.txt

SAM and Grounding-DINO:

# Installing SAM
mkdir dependencies; cd dependencies 
mkdir sam_ckpt; cd sam_ckpt
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
git clone git@github.com:facebookresearch/segment-anything.git 
cd segment-anything; pip install -e .

# Installing Grounding-DINO
git clone https://github.com/IDEA-Research/GroundingDINO.git
cd GroundingDINO/; pip install -e .
mkdir weights; cd weights
wget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth
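
After installation, a quick sanity check that the SAM checkpoint loads (run from the repository root; sam_model_registry and SamPredictor are the public API of the segment-anything package installed above):

import torch
from segment_anything import sam_model_registry, SamPredictor

ckpt = "dependencies/sam_ckpt/sam_vit_h_4b8939.pth"   # downloaded above
sam = sam_model_registry["vit_h"](checkpoint=ckpt)
sam.to("cuda" if torch.cuda.is_available() else "cpu")
predictor = SamPredictor(sam)
print("SAM ViT-H loaded:", sum(p.numel() for p in sam.parameters()), "parameters")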

Download Data

We currently release configs for the following datasets (download links are included in the data structure below):

Data structure:

data
β”œβ”€β”€ 360_v2             # Link: https://jonbarron.info/mipnerf360/
β”‚   └── [bicycle|bonsai|counter|garden|kitchen|room|stump]
β”‚       β”œβ”€β”€ poses_bounds.npy
β”‚       └── [images|images_2|images_4|images_8]
β”‚
β”œβ”€β”€ nerf_llff_data     # Link: https://drive.google.com/drive/folders/14boI-o5hGO9srnWaaogTU5_ji7wkX2S7
β”‚   └── [fern|flower|fortress|horns|leaves|orchids|room|trex]
β”‚       β”œβ”€β”€ poses_bounds.npy
β”‚       └── [images|images_2|images_4|images_8]
β”‚
└── lerf_data               # Link: https://drive.google.com/drive/folders/1vh0mSl7v29yaGsxleadcj-LCZOE_WEWB
    └── [book_store|bouquet|donuts|...]
        β”œβ”€β”€ transforms.json
        └── [images|images_2|images_4|images_8]
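
To sanity-check a downloaded LLFF-style scene, you can inspect poses_bounds.npy, which (per the standard LLFF convention) stores for each image a flattened 3x5 matrix (a 3x4 camera-to-world pose plus a [height, width, focal] column) followed by the near/far depth bounds:

import numpy as np

pb = np.load("data/nerf_llff_data/fern/poses_bounds.npy")   # shape (N, 17)
poses = pb[:, :15].reshape(-1, 3, 5)    # per image: 3x4 pose | [H, W, focal]
c2w = poses[:, :, :4]                   # camera-to-world matrices
hwf = poses[:, :, 4]                    # image height, width, focal length
bounds = pb[:, 15:]                     # near/far depth bounds
print(f"{len(pb)} images, H/W/f = {hwf[0]}, depth {bounds.min():.2f}..{bounds.max():.2f}")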

Usage

  • Train NeRF
    python run.py --config=configs/llff/fern.py --stop_at=20000 --render_video --i_weights=10000
  • Run SA3D in GUI
    python run_seg_gui.py --config=configs/llff/seg/seg_fern.py --segment \
    --sp_name=_gui --num_prompts=20 \
    --render_opt=train --save_ckpt
  • Render and Save Fly-through Videos
    python run_seg_gui.py --config=configs/llff/seg/seg_fern.py --segment \
    --sp_name=_gui --num_prompts=20 \
    --render_only --render_opt=video --dump_images \
    --seg_type seg_img seg_density

Some tips for running SA3D:

  • Increase --num_prompts when the target object is extremely irregular, as in the LLFF scenes Fern and Trex;
  • Use --seg_poses to specify the camera pose sequence used for training the 3D mask; default='train', choices=['train', 'video']. An example command follows this list.
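
For instance, a run that raises the number of self-prompting points and trains the mask on the video pose sequence (all flags as documented above; the values are illustrative):

    python run_seg_gui.py --config=configs/llff/seg/seg_fern.py --segment \
    --sp_name=_gui --num_prompts=30 --seg_poses=video \
    --render_opt=train --save_ckpt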

Using our Dash-based GUI:

  • Select the type of prompt to use; currently supported: Point Prompt and Text Prompt;

    • Point Prompt: select Points in the drop-down menu; click the original image to add a point prompt, and SAM will produce candidate masks; click Clear Points to clear the previous inputs;

      (demo video: point_prompt.mp4)
    • Text Prompt: select Text in the drop-down menu; input your text prompt and click Generate to get candidate masks; note that unreasonable text input may cause errors.

      (demo video: text_prompt.mp4)
  • Select your target mask;

  • Press Start Training to run SA3D; we visualize the rendered masks and the SAM predictions produced by our cross-view self-prompting strategy;

    (demo video: start_train.mp4)
  • Wait a few minutes to see the final rendering results.

    (demo video: results.mp4)

TODO List

  • Refine the GUI, e.g., start from any training view, add more training hyper-parameter options, etc.;
  • Support the two-pass stage in the GUI; it currently may have some bugs.

Some Visualization Samples

SA3D can handle various scenes for 3D segmentation. Find more demos on our project page.

Forward facing | 360° | Multi-objects

Acknowledgements

Thanks to the following projects for their valuable contributions:

Citation

If you find this project helpful for your research, please consider citing the report and giving a ⭐.

@article{cen2023segment,
      title={Segment Anything in 3D with NeRFs}, 
      author={Jiazhong Cen and Zanwei Zhou and Jiemin Fang and Wei Shen and Lingxi Xie and Xiaopeng Zhang and Qi Tian},
      journal={arXiv:2304.12308},
      year={2023}
}