
Code for the paper "pix2gestalt: Amodal Segmentation by Synthesizing Wholes"


pix2gestalt: Amodal Segmentation by Synthesizing Wholes

In Submission

Ege Ozguroglu1, Ruoshi Liu1, Dídac Surís1, Dian Chen2, Achal Dave2, Pavel Tokmakov2, Carl Vondrick1
1Columbia University, 2Toyota Research Institute

[Teaser figure]

Updates

  • Our paper is now available on arXiv!
  • Pre-trained weights and preliminary Gradio demo coming soon.
  • Stay tuned for the release of our training & inference scripts!

Citation

If you use this code, please cite the paper:

@misc{ozguroglu2024pix2gestalt,
      title={pix2gestalt: Amodal Segmentation by Synthesizing Wholes}, 
      author={Ege Ozguroglu and Ruoshi Liu and Dídac Surís and Dian Chen and Achal Dave and Pavel Tokmakov and Carl Vondrick},
      year={2024},
      eprint={2401.14398},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgement

This research is based on work partially supported by the Toyota Research Institute, the DARPA MCS program under Federal Agreement No. N660011924032, the NSF NRI Award #1925157, and the NSF AI Institute for Artificial and Natural Intelligence Award #2229929. DS is supported by the Microsoft PhD Fellowship.