3D Semantic Segmentation


3D semantic segmentation is a computer vision task that partitions a 3D point cloud or 3D mesh into semantically meaningful parts or regions by assigning a class label to every point or face. The goal is to identify and label the different objects and object parts within a 3D scene, which supports applications such as robotics, autonomous driving, and augmented reality.
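At its core, the task is per-point classification. As a minimal illustrative sketch (not a real model), the snippet below assigns each point the label of its nearest hand-labeled seed point; all coordinates, seed labels, and the 1-NN rule are hypothetical stand-ins for a learned segmentation network:

```python
import math

# A toy 3D scene: each point is an (x, y, z) coordinate.
points = [
    (0.1, 0.2, 0.0), (0.3, 0.1, 0.05),   # points near the ground plane
    (0.2, 0.2, 1.5), (0.25, 0.15, 1.6),  # elevated cluster (e.g., an object)
]

# A few hand-labeled seed points; a real system learns labels from data.
seeds = {(0.0, 0.0, 0.0): "ground", (0.2, 0.2, 1.55): "object"}

def segment(points, seeds):
    """Assign each point the label of its nearest labeled seed (1-NN)."""
    labels = []
    for p in points:
        nearest = min(seeds, key=lambda s: math.dist(p, s))
        labels.append(seeds[nearest])
    return labels

print(segment(points, seeds))  # ['ground', 'ground', 'object', 'object']
```

Real methods replace the nearest-seed rule with a network (e.g., point-based or voxel-based backbones) that predicts a label distribution per point, but the input/output contract is the same: N points in, N class labels out.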

Active Semantic Mapping of Horticultural Environments Using Gaussian Splatting

Jan 17, 2026

Active Cross-Modal Visuo-Tactile Perception of Deformable Linear Objects

Jan 20, 2026

Medical SAM3: A Foundation Model for Universal Prompt-Driven Medical Image Segmentation

Jan 15, 2026

3D Conditional Image Synthesis of Left Atrial LGE MRI from Composite Semantic Masks

Jan 08, 2026

Rethinking Multimodal Few-Shot 3D Point Cloud Segmentation: From Fused Refinement to Decoupled Arbitration

Jan 04, 2026

SemanticBridge - A Dataset for 3D Semantic Segmentation of Bridges and Domain Gap Analysis

Dec 18, 2025

Joint Semantic and Rendering Enhancements in 3D Gaussian Modeling with Anisotropic Local Encoding

Jan 05, 2026

Graph Smoothing for Enhanced Local Geometry Learning in Point Cloud Analysis

Jan 16, 2026

UniLiPs: Unified LiDAR Pseudo-Labeling with Geometry-Grounded Dynamic Scene Decomposition

Jan 08, 2026

Systematic Evaluation of Depth Backbones and Semantic Cues for Monocular Pseudo-LiDAR 3D Detection

Jan 07, 2026