3D Semantic Segmentation


3D semantic segmentation is a computer vision task that partitions a 3D point cloud or 3D mesh into semantically meaningful parts or regions, assigning a class label to every point or face. Identifying and labeling the objects and parts within a 3D scene supports applications such as robotics, autonomous driving, and augmented reality.
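To make the per-point labeling concrete, here is a minimal illustrative sketch: a point cloud is an (N, 3) array of xyz coordinates, and a segmentation produces one class label per point. The class names and the trivial height-threshold "classifier" below are assumptions for illustration only, not any specific paper's method.

```python
import numpy as np

# Toy label set (an assumption for this sketch, not a standard taxonomy).
CLASSES = {0: "ground", 1: "object"}

def segment_by_height(points: np.ndarray, ground_z: float = 0.1) -> np.ndarray:
    """Assign label 0 (ground) to points with z below ground_z, else 1 (object).

    points: (N, 3) array of xyz coordinates.
    Returns an (N,) integer label array -- one label per input point,
    which is the output format shared by real 3D semantic segmentation models.
    """
    return (points[:, 2] >= ground_z).astype(np.int64)

# Toy cloud: four near-ground points and two raised points.
cloud = np.array([
    [0.0, 0.0, 0.00],
    [1.0, 0.0, 0.05],
    [0.0, 1.0, 0.02],
    [1.0, 1.0, 0.00],
    [0.5, 0.5, 1.20],  # e.g. part of a pole or tree
    [0.5, 0.6, 1.50],
])
labels = segment_by_height(cloud)  # array of length 6, one label per point
```

Real methods replace the threshold with a learned model (e.g. a point-based or voxel-based network), but the input/output contract — coordinates in, one label per point out — is the same.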

SUM Parts: Benchmarking Part-Level Semantic Segmentation of Urban Meshes

Mar 21, 2025

PSA-SSL: Pose and Size-aware Self-Supervised Learning on LiDAR Point Clouds

Mar 18, 2025

Texture2LoD3: Enabling LoD3 Building Reconstruction With Panoramic Images

Apr 07, 2025

Seg2Box: 3D Object Detection by Point-Wise Semantics Supervision

Mar 21, 2025

Adaptive Transformer Attention and Multi-Scale Fusion for Spine 3D Segmentation

Mar 17, 2025

Point Cloud Based Scene Segmentation: A Survey

Mar 16, 2025

Semantic Consistent Language Gaussian Splatting for Point-Level Open-vocabulary Querying

Mar 27, 2025

Hyperdimensional Uncertainty Quantification for Multimodal Uncertainty Fusion in Autonomous Vehicles Perception

Mar 25, 2025

Scene4U: Hierarchical Layered 3D Scene Reconstruction from Single Panoramic Image for Your Immerse Exploration

Apr 01, 2025

COB-GS: Clear Object Boundaries in 3DGS Segmentation Based on Boundary-Adaptive Gaussian Splitting

Mar 26, 2025