The efficient rendering and explicit representation of 3D Gaussian Splatting (3DGS) have advanced 3D scene manipulation. However, existing methods typically struggle to control the manipulation region and cannot provide the user with interactive feedback, which inevitably leads to unexpected results. Intuitively, incorporating interactive 3D segmentation tools can compensate for this deficiency. Nevertheless, existing segmentation frameworks impose a pre-processing step of scene-specific parameter training, which limits the efficiency and flexibility of scene manipulation.
To deliver a 3D region control module that is well-suited for scene manipulation with reliable efficiency, we propose interactive Segment-and-Manipulate 3D Gaussians (iSegMan), an interactive segmentation and manipulation framework that requires only simple 2D user interactions in any view. To propagate user interactions to other views, we propose Epipolar-guided Interaction Propagation (EIP), which exploits the epipolar constraint for efficient and robust interaction matching. To avoid scene-specific training and maintain efficiency, we further propose the novel Visibility-based Gaussian Voting (VGV), which obtains 2D segmentations from SAM and models region extraction as a voting game between 2D pixels and 3D Gaussians based on Gaussian visibility. Building on the efficient and precise region control of EIP and VGV, we introduce a Manipulation Toolbox that implements various functions on selected regions, enhancing the controllability, flexibility, and practicality of scene manipulation. Extensive experiments on 3D scene manipulation and segmentation tasks demonstrate the significant advantages of iSegMan.
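To make the role of the epipolar constraint in EIP concrete, the sketch below shows the standard geometry it relies on: a clicked pixel in one view defines an epipolar line in another view, and candidate matches can be scored by their distance to that line. This is a minimal illustration of epipolar matching in general, assuming known intrinsics and relative pose; the function names and the top-K filtering idea are ours, not the paper's actual implementation.

```python
import numpy as np

def fundamental_matrix(K1, K2, R, t):
    """Fundamental matrix mapping view-1 pixels to epipolar lines in view 2.

    K1, K2: 3x3 camera intrinsics; R, t: relative pose of view 2
    w.r.t. view 1 (a view-1 camera point X maps to R @ X + t).
    """
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])  # cross-product matrix [t]_x
    return np.linalg.inv(K2).T @ tx @ R @ np.linalg.inv(K1)

def epipolar_distances(pixel, candidates, F):
    """Distance of view-2 candidate pixels (N, 2) to the epipolar line of
    a clicked view-1 `pixel` (2,). Small distance = epipolar-consistent."""
    a, b, c = F @ np.array([pixel[0], pixel[1], 1.0])  # line ax + by + c = 0
    homog = np.hstack([candidates, np.ones((len(candidates), 1))])
    return np.abs(homog @ np.array([a, b, c])) / np.hypot(a, b)
```

For a pure horizontal camera translation with identical intrinsics, the epipolar line of a pixel is simply its image row, so a candidate on the same row scores distance 0; in a matching pipeline one would keep only candidates below a pixel-distance threshold before appearance matching.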
iSegMan contains two novel region control algorithms that are well-suited for scene manipulation with reliable efficiency, Epipolar-guided Interaction Propagation (EIP) and Visibility-based Gaussian Voting (VGV), along with a Manipulation Toolbox that provides various manipulation functions. EIP accepts 2D user interactions in any view and leverages the epipolar constraint to efficiently and robustly propagate these interactions to other views. To avoid scene-specific training and maintain efficiency, VGV obtains 2D masks from SAM and then models 3D region extraction as a voting game between 2D pixels and 3D Gaussians based on Gaussian visibility. Through these versatile manipulation functions, iSegMan greatly enhances the controllability, flexibility, and practicality of 3D scene manipulation.
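The pixel-to-Gaussian voting idea behind VGV can be sketched as follows. Assuming the renderer exposes, for each pixel, the indices and alpha-blending weights of the Gaussians contributing to it (a common output of 3DGS rasterizers, used here as a stand-in for the paper's exact visibility term), every pixel casts weighted votes for those Gaussians; a Gaussian joins the selected region when the fraction of its votes coming from inside the SAM mask exceeds a threshold. The array layout, top-K truncation, and threshold `tau` are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def gaussian_voting(pixel_gaussians, pixel_weights, mask, num_gaussians, tau=0.5):
    """Toy visibility-based voting between 2D pixels and 3D Gaussians.

    pixel_gaussians: (H, W, K) int indices of the top-K Gaussians per pixel
    pixel_weights:   (H, W, K) their alpha-blending weights (visibility proxy)
    mask:            (H, W) boolean SAM segmentation mask
    Returns a boolean (num_gaussians,) selection of in-region Gaussians.
    """
    K = pixel_gaussians.shape[2]
    flat_g = pixel_gaussians.reshape(-1)
    flat_w = pixel_weights.reshape(-1)
    flat_m = np.repeat(mask.reshape(-1), K)  # mask flag per (pixel, Gaussian) sample

    votes_all = np.zeros(num_gaussians)
    votes_in = np.zeros(num_gaussians)
    np.add.at(votes_all, flat_g, flat_w)            # total visibility of each Gaussian
    np.add.at(votes_in, flat_g, flat_w * flat_m)    # visibility from masked pixels only

    ratio = votes_in / np.maximum(votes_all, 1e-8)  # in-mask vote share
    return ratio > tau
```

Because the decision is a per-Gaussian ratio rather than a learned classifier, no scene-specific training is needed: the votes are computed directly from one rendering pass and one SAM mask.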