GS4: Generalizable Sparse Splatting Semantic SLAM

Under Review
Collaborative Robotics and Intelligent Systems (CoRIS) Institute
Oregon State University

Abstract

Traditional SLAM algorithms excel at camera tracking but tend to produce low-resolution, incomplete 3D maps. Recently, Gaussian Splatting (GS) approaches have emerged as an option for SLAM with accurate, dense 3D map building. However, existing GS-based SLAM methods rely on per-scene optimization, which is time-consuming and does not generalize well to diverse scenes. In this work, we introduce the first generalizable GS-based semantic SLAM algorithm that incrementally builds and updates a 3D scene representation from an RGB-D video stream using a learned generalizable network. Our approach starts from an RGB-D image recognition backbone that predicts the Gaussian parameters at every downsampled and back-projected image location. Additionally, we seamlessly integrate 3D semantic segmentation into our GS framework, bridging 3D mapping and recognition through a shared backbone. To correct localization drift and remove floaters, we optimize the Gaussians for only one iteration following global localization. We demonstrate state-of-the-art semantic SLAM performance on the real-world benchmark ScanNet with an order of magnitude fewer Gaussians than other recent GS-based methods, and showcase our model's generalization capability through zero-shot transfer to the NYUv2 and TUM RGB-D datasets.
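The abstract mentions predicting Gaussians at "every downsampled and back-projected image location." As a rough illustration, a minimal sketch of that seeding step under the standard pinhole camera model is shown below; the function names, the stride-based downsampling, and the zero-depth validity check are illustrative assumptions, not the paper's actual implementation.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Standard pinhole back-projection of pixel (u, v) with metric depth
    to a 3D point in camera coordinates."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return (x, y, depth)

def downsampled_centers(depth_map, stride, fx, fy, cx, cy):
    """Back-project every `stride`-th pixel with valid depth.
    In a GS-style pipeline, points like these could seed Gaussian centers,
    with the remaining parameters predicted by the network."""
    centers = []
    for v in range(0, len(depth_map), stride):
        for u in range(0, len(depth_map[0]), stride):
            d = depth_map[v][u]
            if d > 0:  # skip invalid (missing) depth readings
                centers.append(backproject(u, v, d, fx, fy, cx, cy))
    return centers
```

Downsampling before back-projection is what keeps the Gaussian count low: a stride of 2 already cuts the number of seeded Gaussians by a factor of four relative to per-pixel seeding.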

Overview


Overview of the SLAM system. At each timestep, the system receives an RGB-D frame as input. The tracking system performs local camera tracking and global localization to determine the current frame's pose and correct previous pose errors. Our 3D mapping process comprises three main components: 1) Gaussian Prediction (Sec 3.2.1): using the current frame's RGB-D data, the Gaussian Prediction Model estimates the parameters and semantic labels of all Gaussians for the current frame; 2) Gaussian Refinement (Sec 3.2.2): both newly added Gaussians and those in the existing semantic 3D map are refined by the Gaussian Refinement Network so that the combined set of Gaussians accurately represents the scene. A covisibility check ensures that only non-overlapping Gaussians are integrated into the existing 3D map, and transparent Gaussians are pruned after refinement; 3) One-Iteration Gaussian Optimization (Sec 3.3.2): if a significant pose correction occurs, a one-iteration Gaussian optimization updates the 3D map's Gaussians to keep them consistent with the revised camera poses. (Best viewed in color)
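The per-frame mapping loop described in the caption can be sketched roughly as follows. Everything here is an illustrative stand-in rather than the paper's implementation: the opacity threshold, the dict-based Gaussian records, and the set-membership covisibility test are assumptions made for the sketch, and `refine` / `optimize_once` are placeholders for the Gaussian Refinement Network and the one-iteration optimization.

```python
OPACITY_EPS = 0.005  # illustrative pruning threshold, not taken from the paper

def covisibility_filter(new_gaussians, map_pixels):
    """Covisibility check: keep only new Gaussians whose source pixel
    is not already covered by the existing map."""
    return [g for g in new_gaussians if g["pixel"] not in map_pixels]

def prune_transparent(gaussians):
    """Drop Gaussians whose opacity fell below the threshold after refinement."""
    return [g for g in gaussians if g["opacity"] >= OPACITY_EPS]

def mapping_step(scene_map, map_pixels, new_gaussians, refine,
                 pose_corrected, optimize_once):
    """One mapping step mirroring the three components in the overview."""
    # 1) covisibility check: only non-overlapping Gaussians enter the map
    fresh = covisibility_filter(new_gaussians, map_pixels)
    # 2) joint refinement of new + existing Gaussians, then prune transparent ones
    scene_map = prune_transparent(refine(scene_map + fresh))
    map_pixels.update(g["pixel"] for g in fresh)
    # 3) one-iteration optimization, triggered only by a significant pose correction
    if pose_corrected:
        scene_map = optimize_once(scene_map)
    return scene_map, map_pixels
```

The point of the structure is that refinement and pruning run every frame, while the costlier map-wide update runs only when global localization actually revises past poses.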

Rendering Performance

GS4 demonstrates superior performance on high-quality RGB-D datasets. On the challenging ScanNet dataset, our method achieves state-of-the-art results, outperforming previous approaches with a 14.6% improvement in rendering quality (PSNR) over the runner-up Point-SLAM, and a 23.2% improvement in semantic rendering accuracy (mIoU) over the runner-up SGS-SLAM.

BibTeX

@misc{jiang2025gs4,
  title={GS4: Generalizable Sparse Splatting Semantic SLAM},
  author={Mingqi Jiang and Chanho Kim and Chen Ziwen and Li Fuxin},
  year={2025},
  eprint={2506.06517},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.06517},
}