3D Semantic Segmentation


3D semantic segmentation is a computer vision task that partitions a 3D point cloud or 3D mesh into semantically meaningful parts or regions. The goal is to identify and label the objects and parts that make up a 3D scene, which supports applications such as robotics, autonomous driving, and augmented reality.
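Concretely, a 3D semantic segmentation model predicts one class label per point, and results are typically scored with per-class intersection-over-union (IoU) and its mean over classes (mIoU). Below is a minimal NumPy sketch of that evaluation step; the point cloud, class set, and height-based "predictions" are toy assumptions for illustration, not taken from any of the listed papers:

```python
import numpy as np

# Hypothetical class set: 0 = ground, 1 = building, 2 = vegetation.
NUM_CLASSES = 3

rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(1000, 3))   # (N, 3) xyz point cloud
gt = (points[:, 2] > 5.0).astype(int)         # toy ground-truth labels from height
pred = (points[:, 2] > 4.8).astype(int)       # toy "model" predictions

def per_class_iou(pred, gt, num_classes):
    """Intersection-over-union per class; NaN for classes absent from both."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

ious = per_class_iou(pred, gt, NUM_CLASSES)
miou = np.nanmean(ious)  # mean IoU over classes that actually occur
print(f"per-class IoU: {ious}, mIoU: {miou:.3f}")
```

Ignoring absent classes via NaN (rather than counting them as 0) is a common convention, since penalizing a scene for classes it does not contain would distort the mean.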

LightSplat: Fast and Memory-Efficient Open-Vocabulary 3D Scene Understanding in Five Seconds

Mar 25, 2026

Active Robotic Perception for Disease Detection and Mapping in Apple Trees

Mar 24, 2026

Benchmarking Deep Learning Models for Aerial LiDAR Point Cloud Semantic Segmentation under Real Acquisition Conditions: A Case Study in Navarre

Mar 23, 2026

Grounding Vision and Language to 3D Masks for Long-Horizon Box Rearrangement

Mar 24, 2026

NeuroSeg Meets DINOv3: Transferring 2D Self-Supervised Visual Priors to 3D Neuron Segmentation via DINOv3 Initialization

Mar 24, 2026

Spatially-Aware Evaluation Framework for Aerial LiDAR Point Cloud Semantic Segmentation: Distance-Based Metrics on Challenging Regions

Mar 23, 2026

UrbanVGGT: Scalable Sidewalk Width Estimation from Street View Images

Mar 23, 2026

Sketch2CT: Multimodal Diffusion for Structure-Aware 3D Medical Volume Generation

Mar 23, 2026

UniFunc3D: Unified Active Spatial-Temporal Grounding for 3D Functionality Segmentation

Mar 24, 2026

Uncertainty-aware Prototype Learning with Variational Inference for Few-shot Point Cloud Segmentation

Mar 20, 2026