3D Semantic Segmentation


3D semantic segmentation is a computer vision task that divides a 3D point cloud or 3D mesh into semantically meaningful parts or regions. The goal is to identify and label the different objects and parts within a 3D scene, supporting applications such as robotics, autonomous driving, and augmented reality.
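As a minimal illustration of the task's input/output contract (not the method of any paper listed below): the input is an N×3 array of point coordinates and the output is one semantic class label per point. The sketch below uses a hypothetical height-threshold rule and made-up class names purely to show the data shapes involved.

```python
import numpy as np

# Illustrative class names; real datasets (e.g. indoor or driving scenes)
# define their own label sets.
CLASSES = ["ground", "object", "ceiling"]

def segment_by_height(points: np.ndarray,
                      ground_z: float = 0.2,
                      ceiling_z: float = 2.5) -> np.ndarray:
    """Assign one semantic label index per point from its z coordinate.

    points: (N, 3) array of xyz coordinates.
    Returns: (N,) integer array of indices into CLASSES.
    This threshold rule is a toy stand-in for a learned per-point classifier.
    """
    z = points[:, 2]
    labels = np.full(len(points), 1, dtype=np.int64)  # default: "object"
    labels[z < ground_z] = 0    # low points  -> "ground"
    labels[z > ceiling_z] = 2   # high points -> "ceiling"
    return labels

# Example: a tiny synthetic scene with one point per region.
cloud = np.array([
    [0.0, 0.0, 0.05],   # near the floor
    [1.0, 2.0, 1.20],   # mid-height
    [0.5, 1.0, 2.80],   # near the ceiling
])
print([CLASSES[i] for i in segment_by_height(cloud)])
# -> ['ground', 'object', 'ceiling']
```

A learned segmentation network replaces the threshold rule with a per-point score over the classes, but the output format, one label per input point, is the same.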

Ambiguity-aware Point Cloud Segmentation by Adaptive Margin Contrastive Learning

Jul 09, 2025

PointVDP: Learning View-Dependent Projection by Fireworks Rays for 3D Point Cloud Segmentation

Jul 09, 2025

Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion

Jul 08, 2025

LangScene-X: Reconstruct Generalizable 3D Language-Embedded Scenes with TriMap Video Diffusion

Jul 03, 2025

How Well Does GPT-4o Understand Vision? Evaluating Multimodal Foundation Models on Standard Computer Vision Tasks

Jul 02, 2025

Prompt Guidance and Human Proximal Perception for HOT Prediction with Regional Joint Loss

Jul 02, 2025

TSDASeg: A Two-Stage Model with Direct Alignment for Interactive Point Cloud Segmentation

Jun 26, 2025

PanSt3R: Multi-view Consistent Panoptic Segmentation

Jun 26, 2025

A Survey of Multi-sensor Fusion Perception for Embodied AI: Background, Methods, Challenges and Prospects

Jun 24, 2025

SAM4D: Segment Anything in Camera and LiDAR Streams

Jun 26, 2025