3D Semantic Segmentation


3D semantic segmentation is a computer vision task that divides a 3D point cloud or 3D mesh into semantically meaningful parts or regions by assigning a class label to every point or face. Identifying and labeling the objects and parts within a 3D scene in this way supports applications such as robotics, autonomous driving, and augmented reality.
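Concretely, the output of the task is one label per point. A minimal sketch in Python (a toy height-threshold rule standing in for a learned model; the function name, thresholds, and class names are illustrative, not from any paper listed below):

```python
# A point cloud is a sequence of (x, y, z) coordinates; semantic
# segmentation assigns one class label to every point.
def segment_by_height(points, floor_z=0.2, ceiling_z=2.5):
    """Toy per-point labeling: classify each point by its z value."""
    labels = []
    for x, y, z in points:
        if z < floor_z:
            labels.append("floor")
        elif z > ceiling_z:
            labels.append("ceiling")
        else:
            labels.append("object")
    return labels

cloud = [(0.1, 0.4, 0.05), (1.2, 0.3, 1.1), (0.8, 0.9, 2.9)]
print(segment_by_height(cloud))  # one label per input point
```

A real system replaces the threshold rule with a network (e.g. a point-based or voxel-based model) that predicts the per-point label distribution, but the input/output contract is the same.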

BEA-GS: BEyond RAdiance Supervision in 3DGS for Precise Object Extraction

May 10, 2026

OpenGaFF: Open-Vocabulary Gaussian Feature Field with Codebook Attention

May 07, 2026

FUS3DMaps: Scalable and Accurate Open-Vocabulary Semantic Mapping by 3D Fusion of Voxel- and Instance-Level Layers

May 05, 2026

Training-Free Dense Hand Contact Estimation with Multi-Modal Large Language Models

May 07, 2026

Anny-Fit: All-Age Human Mesh Recovery

May 06, 2026

A Bayesian Approach for Task-Specific Next-Best-View Selection with Uncertain Geometry

May 06, 2026

Ilov3Splat: Instance-Level Open-Vocabulary 3D Scene Understanding in Gaussian Splatting

May 06, 2026

RD-ViT: Recurrent-Depth Vision Transformer for Semantic Segmentation with Reduced Data Dependence: Extending the Recurrent-Depth Transformer Architecture to Dense Prediction

May 05, 2026

INSIGHT: Indoor Scene Intelligence from Geometric-Semantic Hierarchy Transfer for Public Safety

Apr 25, 2026

BIMStruct3D: A Fully Automated Hybrid Learning Scan-to-BIM Pipeline with Integrated Topology Refinement

Apr 27, 2026