3D Semantic Segmentation


3D Semantic Segmentation is a computer vision task that involves dividing a 3D point cloud or 3D mesh into semantically meaningful parts or regions. The goal is to assign a semantic label to every point or face, identifying the objects and surfaces within a 3D scene; the resulting labels support applications such as robotics, autonomous driving, and augmented reality.
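
The following is a minimal, illustrative sketch of what "a label per point" means in practice: a small PointNet-style per-point classifier in PyTorch. The architecture, class count, and class names are assumptions for illustration only and are not taken from any of the papers listed below.

```python
# Minimal sketch: assign one of C semantic classes to every point in an
# (N, 3) point cloud. Hypothetical example, not a published method.
import torch
import torch.nn as nn


class PerPointSegmenter(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Shared MLP applied independently to each point (PointNet-style).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # Head maps concatenated local + global features to per-point logits.
        self.head = nn.Linear(128 + 128, num_classes)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) -> logits: (B, N, num_classes)
        feats = self.point_mlp(points)                        # (B, N, 128) local features
        global_feat = feats.max(dim=1, keepdim=True).values   # (B, 1, 128) scene context
        global_feat = global_feat.expand(-1, feats.shape[1], -1)
        fused = torch.cat([feats, global_feat], dim=-1)       # local + global per point
        return self.head(fused)


if __name__ == "__main__":
    # Assumed class set (road, car, building, vegetation) is purely illustrative.
    model = PerPointSegmenter(num_classes=4)
    cloud = torch.rand(2, 1024, 3)         # batch of 2 clouds, 1024 points each
    labels = model(cloud).argmax(dim=-1)   # (2, 1024): predicted class per point
    print(labels.shape)
```

The key property shown here is that the output has one class prediction per input point, which is what distinguishes semantic segmentation from whole-scene or whole-object classification.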

RAZER: Robust Accelerated Zero-Shot 3D Open-Vocabulary Panoptic Reconstruction with Spatio-Temporal Aggregation

May 21, 2025

TK-Mamba: Marrying KAN with Mamba for Text-Driven 3D Medical Image Segmentation

May 24, 2025

3D Can Be Explored In 2D: Pseudo-Label Generation for LiDAR Point Clouds Using Sensor-Intensity-Based 2D Semantic Segmentation

May 06, 2025

LOD1 3D City Model from LiDAR: The Impact of Segmentation Accuracy on Quality of Urban 3D Modeling and Morphology Extraction

May 20, 2025

Improving Open-Set Semantic Segmentation in 3D Point Clouds by Conditional Channel Capacity Maximization: Preliminary Results

May 09, 2025

SeaLion: Semantic Part-Aware Latent Point Diffusion Models for 3D Generation

May 23, 2025

TAGS: 3D Tumor-Adaptive Guidance for SAM

May 21, 2025

SPARS: Self-Play Adversarial Reinforcement Learning for Segmentation of Liver Tumours

May 25, 2025

SELECT: A Submodular Approach for Active LiDAR Semantic Segmentation

May 06, 2025

Is Semantic SLAM Ready for Embedded Systems? A Comparative Survey

May 18, 2025