3D Semantic Segmentation


3D semantic segmentation is a computer vision task that divides a 3D point cloud or 3D mesh into semantically meaningful parts or regions by assigning a class label to every point or vertex. Identifying and labeling the objects and parts within a 3D scene in this way supports applications such as robotics, autonomous driving, and augmented reality.
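
In practice, a model for this task consumes per-point coordinates (often with color or intensity features) and outputs one class label per point, and results are commonly scored with per-class and mean intersection-over-union (IoU). The sketch below is a minimal illustration of that input/output shape and the IoU metric; it assumes only NumPy, and the class names and random data are hypothetical placeholders, not drawn from any of the papers listed here.

```python
import numpy as np

# Toy point cloud: N points with (x, y, z) coordinates and one semantic
# label per point. The class names are purely illustrative.
CLASSES = ["ground", "vehicle", "vegetation"]

rng = np.random.default_rng(0)
points = rng.uniform(-10.0, 10.0, size=(1000, 3))      # (N, 3) point coordinates
gt_labels = rng.integers(0, len(CLASSES), size=1000)    # ground-truth label per point

# Simulate a prediction that disagrees with the ground truth on ~10% of points.
noise = rng.random(1000) < 0.1
pred_labels = np.where(noise, rng.integers(0, len(CLASSES), size=1000), gt_labels)

def per_class_iou(pred, gt, num_classes):
    """Intersection-over-union for each semantic class, computed over all points."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        ious.append(inter / union if union > 0 else float("nan"))
    return np.array(ious)

ious = per_class_iou(pred_labels, gt_labels, len(CLASSES))
print({name: round(float(iou), 3) for name, iou in zip(CLASSES, ious)})
print("mIoU:", float(np.nanmean(ious)))
```

Real pipelines replace the random predictions with a learned point-cloud network, but the evaluation step (per-class IoU averaged into mIoU) is the same.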

Enhancing Human-Robot Collaboration: A Sim2Real Domain Adaptation Algorithm for Point Cloud Segmentation in Industrial Environments

Jun 11, 2025

GS4: Generalizable Sparse Splatting Semantic SLAM

Jun 06, 2025

Point-MoE: Towards Cross-Domain Generalization in 3D Semantic Segmentation via Mixture-of-Experts

May 29, 2025

seg_3D_by_PC2D: Multi-View Projection for Domain Generalization and Adaptation in 3D Semantic Segmentation

May 21, 2025

ReCoGNet: Recurrent Context-Guided Network for 3D MRI Prostate Segmentation

Jun 24, 2025

Prompt Guidance and Human Proximal Perception for HOT Prediction with Regional Joint Loss

Jul 02, 2025

Point Cloud Segmentation of Agricultural Vehicles using 3D Gaussian Splatting

Jun 05, 2025

Genesis: Multimodal Driving Scene Generation with Spatio-Temporal and Cross-Modal Consistency

Jun 09, 2025

AnchorDP3: 3D Affordance Guided Sparse Diffusion Policy for Robotic Manipulation

Jun 24, 2025

NeurNCD: Novel Class Discovery via Implicit Neural Representation

Jun 06, 2025