3D Semantic Segmentation


3D semantic segmentation is a computer vision task that divides a 3D point cloud or 3D mesh into semantically meaningful regions by assigning a class label to every point or face. The goal is to identify and label the objects and surfaces in a 3D scene, which supports applications such as robotics, autonomous driving, and augmented reality.
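The per-point labeling goal described above can be illustrated with a toy sketch: given a few labeled seed points in a synthetic scene, classify every remaining point by a k-nearest-neighbour majority vote. This is a minimal illustration only; the scene, labels, and `knn_segment` helper are all invented for this example and do not correspond to any method listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: a flat "ground" patch (class 0) and a raised "object" blob (class 1).
ground = np.column_stack([rng.uniform(-5, 5, 200),
                          rng.uniform(-5, 5, 200),
                          rng.normal(0.0, 0.05, 200)])
blob = rng.normal([0.0, 0.0, 1.5], 0.3, size=(100, 3))
points = np.vstack([ground, blob])
labels = np.concatenate([np.zeros(200, dtype=int), np.ones(100, dtype=int)])

def knn_segment(train_pts, train_labels, query_pts, k=5):
    """Label each query point by majority vote of its k nearest labeled points."""
    preds = np.empty(len(query_pts), dtype=int)
    for i, q in enumerate(query_pts):
        dists = np.linalg.norm(train_pts - q, axis=1)
        nearest = train_labels[np.argsort(dists)[:k]]
        preds[i] = np.bincount(nearest).argmax()
    return preds

# Hold out some points and predict a semantic class for each of them.
idx = rng.permutation(len(points))
train, test = idx[:240], idx[240:]
pred = knn_segment(points[train], labels[train], points[test])
accuracy = (pred == labels[test]).mean()
print(f"per-point accuracy: {accuracy:.2f}")
```

Real systems replace the nearest-neighbour vote with learned point-wise features (e.g. sparse 3D convolutions or point transformers), but the output contract is the same: one semantic label per point.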

G2P: Gaussian-to-Point Attribute Alignment for Boundary-Aware 3D Semantic Segmentation

Jan 07, 2026

3D Conditional Image Synthesis of Left Atrial LGE MRI from Composite Semantic Masks

Jan 08, 2026

UniLiPs: Unified LiDAR Pseudo-Labeling with Geometry-Grounded Dynamic Scene Decomposition

Jan 08, 2026

Systematic Evaluation of Depth Backbones and Semantic Cues for Monocular Pseudo-LiDAR 3D Detection

Jan 07, 2026

Leveraging 2D-VLM for Label-Free 3D Segmentation in Large-Scale Outdoor Scene Understanding

Jan 05, 2026

A Vision-Language-Action Model with Visual Prompt for OFF-Road Autonomous Driving

Jan 07, 2026

PackCache: A Training-Free Acceleration Method for Unified Autoregressive Video Generation via Compact KV-Cache

Jan 07, 2026

Staged Voxel-Level Deep Reinforcement Learning for 3D Medical Image Segmentation with Noisy Annotations

Jan 07, 2026

Joint Semantic and Rendering Enhancements in 3D Gaussian Modeling with Anisotropic Local Encoding

Jan 05, 2026

Rethinking Multimodal Few-Shot 3D Point Cloud Segmentation: From Fused Refinement to Decoupled Arbitration

Jan 04, 2026