3D Semantic Segmentation


3D semantic segmentation is a computer vision task that partitions a 3D point cloud or 3D mesh into semantically meaningful parts or regions, assigning a class label to every point or face. The goal is to identify and label the different objects and surfaces within a 3D scene, which supports applications such as robotics, autonomous driving, and augmented reality.
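As a minimal illustration of the idea of per-point labeling, the sketch below segments a toy point cloud with a nearest-seed (k-NN) classifier. This is not any particular paper's method, just a hedged NumPy example: the scene, seed points, and class labels are all made up for demonstration.

```python
import numpy as np

def segment_point_cloud(points, seed_points, seed_labels, k=3):
    """Toy semantic segmentation: label each 3D point with the majority
    class among its k nearest labeled seed points."""
    # Pairwise distances between every point and every seed, shape (N, S)
    dists = np.linalg.norm(points[:, None, :] - seed_points[None, :, :], axis=-1)
    nearest = np.argsort(dists, axis=1)[:, :k]      # indices of k nearest seeds
    votes = seed_labels[nearest]                    # (N, k) label votes
    # Majority vote per point
    return np.array([np.bincount(row).argmax() for row in votes])

# Hypothetical scene: a ground plane (class 0) and a box floating above it (class 1)
rng = np.random.default_rng(0)
ground = rng.uniform([-1, -1, 0.0], [1, 1, 0.05], size=(50, 3))
box = rng.uniform([-0.2, -0.2, 0.5], [0.2, 0.2, 0.9], size=(50, 3))
cloud = np.vstack([ground, box])

seeds = np.array([[0.0, 0.0, 0.0],   # a known ground point
                  [0.0, 0.0, 0.7]])  # a known box point
seed_labels = np.array([0, 1])

labels = segment_point_cloud(cloud, seeds, seed_labels, k=1)
print(labels[:50].sum(), labels[50:].sum())  # prints "0 50": ground → 0, box → 1
```

Real systems replace the hand-picked seeds and Euclidean nearest-neighbor rule with a learned model (e.g., a point-based or voxel-based neural network), but the input/output contract is the same: N points in, N class labels out.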

Recent papers:

- Articulated 3D Scene Graphs for Open-World Mobile Manipulation (Feb 18, 2026)
- LaSSM: Efficient Semantic-Spatial Query Decoding via Local Aggregation and State Space Models for 3D Instance Segmentation (Feb 11, 2026)
- Clutt3R-Seg: Sparse-view 3D Instance Segmentation for Language-grounded Grasping in Cluttered Scenes (Feb 12, 2026)
- COOPERTRIM: Adaptive Data Selection for Uncertainty-Aware Cooperative Perception (Feb 07, 2026)
- Splat and Distill: Augmenting Teachers with Feed-Forward 3D Reconstruction For 3D-Aware Distillation (Feb 05, 2026)
- INHerit-SG: Incremental Hierarchical Semantic Scene Graphs with RAG-Style Retrieval (Feb 13, 2026)
- SHED Light on Segmentation for Dense Prediction (Jan 30, 2026)
- Seeing Through Clutter: Structured 3D Scene Reconstruction via Iterative Object Removal (Feb 03, 2026)
- Split&Splat: Zero-Shot Panoptic Segmentation via Explicit Instance Modeling and 3D Gaussian Splatting (Feb 01, 2026)
- Deep Learning for Semantic Segmentation of 3D Ultrasound Data (Jan 19, 2026)