3D Semantic Segmentation


3D semantic segmentation is a computer vision task that partitions a 3D point cloud or 3D mesh into semantically meaningful parts or regions. The goal is to identify and label the objects and surfaces within a 3D scene, which supports applications such as robotics, autonomous driving, and augmented reality.
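The core idea can be illustrated with a minimal sketch: every 3D point in the cloud receives one semantic label. Real systems learn point features with a neural network; here, purely for illustration, each point is assigned the label of its nearest class prototype (the prototypes and class names below are hypothetical, not from any listed paper).

```python
import math

# Hypothetical class prototypes in 3D space (illustration only;
# real segmentation models learn features rather than using fixed centroids).
PROTOTYPES = {
    "ground": (0.0, 0.0, 0.0),
    "wall":   (0.0, 0.0, 2.0),
}

def segment(points):
    """Assign each 3D point the label of its nearest prototype."""
    labels = []
    for p in points:
        label = min(PROTOTYPES, key=lambda c: math.dist(p, PROTOTYPES[c]))
        labels.append(label)
    return labels

cloud = [(0.1, 0.2, 0.05), (0.0, 0.1, 1.9), (0.3, -0.2, 0.1)]
print(segment(cloud))  # one semantic label per input point
```

The output has the same length as the input cloud, which is the defining property of the task: a dense, per-point labeling rather than a single label for the whole scene.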

Uncertainty-aware Prototype Learning with Variational Inference for Few-shot Point Cloud Segmentation

Mar 20, 2026

TCATSeg: A Tooth Center-Wise Attention Network for 3D Dental Model Semantic Segmentation

Mar 17, 2026

Reconstruction Matters: Learning Geometry-Aligned BEV Representation through 3D Gaussian Splatting

Mar 19, 2026

Spatially-Aware Evaluation Framework for Aerial LiDAR Point Cloud Semantic Segmentation: Distance-Based Metrics on Challenging Regions

Mar 23, 2026

OVI-MAP: Open-Vocabulary Instance-Semantic Mapping

Mar 27, 2026

UrbanVGGT: Scalable Sidewalk Width Estimation from Street View Images

Mar 23, 2026

Active Robotic Perception for Disease Detection and Mapping in Apple Trees

Mar 24, 2026

Sketch2CT: Multimodal Diffusion for Structure-Aware 3D Medical Volume Generation

Mar 23, 2026

FAST3DIS: Feed-forward Anchored Scene Transformer for 3D Instance Segmentation

Mar 27, 2026

DriveTok: 3D Driving Scene Tokenization for Unified Multi-View Reconstruction and Understanding

Mar 19, 2026