SAMPro3D: Locating SAM Prompts in 3D for Zero-Shot Instance Segmentation

SSE, CUHKSZ    FNii, CUHKSZ    Microsoft Research Asia

3DV 2025


We introduce SAMPro3D for zero-shot 3D instance segmentation.

Abstract

We introduce SAMPro3D for zero-shot instance segmentation of 3D scenes. Given the 3D point cloud and posed RGB-D frames of a scene, our approach segments 3D instances by applying the pretrained Segment Anything Model (SAM) to the 2D frames. Our key idea is to locate SAM prompts in 3D so that their projected pixel prompts are aligned across frames, ensuring the view consistency of SAM-predicted masks. Moreover, we filter the initial prompts guided by the SAM-predicted masks across all views, which enhances the overall performance. We further consolidate prompts that segment different surface parts of the same 3D instance, yielding a more comprehensive segmentation. Notably, our method does not require any additional training. Extensive experiments on diverse benchmarks show that our method achieves performance comparable to or better than previous zero-shot and fully supervised approaches, and in many cases even surpasses human annotations. Furthermore, since available datasets often lack annotations matching our fine-grained predictions, we present the ScanNet200-Fine50 test set, which provides fine-grained annotations on 50 scenes from the ScanNet200 dataset.
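To make the "prompting SAM from 3D" idea concrete, the snippet below is a minimal sketch of how a single projected pixel prompt could drive SAM on one RGB frame, using the public segment-anything API. The checkpoint path, function name, and prompt coordinates are illustrative placeholders, not taken from the released code.

import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM (checkpoint path is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

def segment_with_pixel_prompt(rgb_frame, pixel_xy):
    """Run SAM on one frame with a single positive point prompt.

    rgb_frame : (H, W, 3) uint8 RGB image.
    pixel_xy  : (x, y) pixel location of the projected 3D prompt.
    """
    predictor.set_image(rgb_frame)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([pixel_xy], dtype=np.float32),
        point_labels=np.array([1]),   # 1 = foreground point
        multimask_output=False,
    )
    return masks[0], scores[0]        # boolean mask and its confidence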


Key Idea


Comparison between our key idea and alternatives. Our method (b) locates SAM prompts in 3D and projects them into each frame, which aligns pixel prompts across frames, ensures the frame-to-frame consistency of prompts and their masks, and handles newly appearing instances; a projection sketch follows below.
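In code, aligning a 3D prompt across frames amounts to projecting the same 3D point with each frame's camera parameters. The sketch below assumes pinhole intrinsics K, world-to-camera extrinsics, and a per-frame depth map for a visibility check; all names and the depth tolerance are illustrative assumptions.

import numpy as np

def project_prompt(p_world, K, world_to_cam, depth, depth_tol=0.05):
    """Project one 3D prompt into a posed RGB-D frame.

    p_world      : (3,) prompt location in world coordinates.
    K            : (3, 3) pinhole intrinsics.
    world_to_cam : (4, 4) extrinsics mapping world -> camera.
    depth        : (H, W) depth map of the frame, in meters.
    Returns (x, y) pixel coordinates, or None if the prompt is behind
    the camera, outside the image, or occluded in this frame.
    """
    p_cam = world_to_cam[:3, :3] @ p_world + world_to_cam[:3, 3]
    if p_cam[2] <= 0:                              # behind the camera
        return None
    uv = K @ p_cam
    x, y = uv[0] / uv[2], uv[1] / uv[2]
    h, w = depth.shape
    xi, yi = int(round(x)), int(round(y))
    if not (0 <= xi < w and 0 <= yi < h):          # outside the frame
        return None
    if abs(depth[yi, xi] - p_cam[2]) > depth_tol:  # occluded
        return None
    return x, y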


Method Overview


An overview of our framework, with a primary focus on prompts. Given the 3D point cloud of a scene and its posed RGB-D frames, we locate SAM prompts in the 3D scene and project them onto the 2D frames, where SAM produces 2D segmentation masks. The initial prompts and their masks are then filtered and consolidated, leveraging both multi-view and 3D surface information. Finally, we project all input points onto the segmented frames to obtain the 3D segmentation result.
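The final lifting step can be sketched as follows, reusing the illustrative projection helper above: every scene point is projected into each segmented frame, collects the IDs of the prompts whose masks cover it, and is assigned the ID it receives most often. This is only a rough sketch; the consolidation of prompts covering the same instance is abbreviated here and all names are assumptions.

import numpy as np
from collections import Counter

def lift_masks_to_3d(points, frames, prompt_masks, project_fn):
    """Assign each 3D point the prompt ID whose masks cover it most often.

    points       : (N, 3) scene point cloud in world coordinates.
    frames       : list of per-frame dicts with 'K', 'world_to_cam', 'depth'.
    prompt_masks : prompt_masks[f][p] is the boolean SAM mask of prompt p
                   in frame f, or None if the prompt is invisible there.
    project_fn   : e.g. project_prompt from the sketch above.
    """
    votes = [Counter() for _ in range(len(points))]
    for f, frame in enumerate(frames):
        for i, p in enumerate(points):
            pix = project_fn(p, frame["K"], frame["world_to_cam"], frame["depth"])
            if pix is None:
                continue
            x, y = int(round(pix[0])), int(round(pix[1]))
            for prompt_id, mask in enumerate(prompt_masks[f]):
                if mask is not None and mask[y, x]:
                    votes[i][prompt_id] += 1
    # -1 marks points never covered by any prompt's mask.
    return np.array([v.most_common(1)[0][0] if v else -1 for v in votes])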

Animated Qualitative Comparison

Qualitative Comparison

BibTeX

@article{xu2023sampro3d,
  title={SAMPro3D: Locating SAM Prompts in 3D for Zero-Shot Scene Segmentation},
  author={Mutian Xu and Xingyilang Yin and Lingteng Qiu and Yang Liu and Xin Tong and Xiaoguang Han},
  year={2023},
  journal={arXiv preprint arXiv:2311.17707}
}