3D Semantic Segmentation


3D semantic segmentation is a computer vision task that partitions a 3D point cloud or 3D mesh into semantically meaningful regions by assigning a class label to every point or face. The goal is to identify and label the objects and surfaces within a 3D scene, supporting applications such as robotics, autonomous driving, and augmented reality.
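To make the per-point labeling idea concrete, here is a minimal sketch using a toy nearest-centroid classifier on a synthetic point cloud. The class names ("ground", "car"), the centroids, and the classifier itself are illustrative assumptions, not a method from any of the papers listed below; real systems use learned models such as point-based or voxel-based neural networks.

```python
import numpy as np

def segment_points(points, centroids, labels):
    """Assign each 3D point the class label of its nearest centroid."""
    # (N, 1, 3) - (1, K, 3) -> (N, K) squared distances to each centroid
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return [labels[i] for i in d2.argmin(axis=1)]

rng = np.random.default_rng(0)
# Two synthetic "objects": one cluster near the origin, one shifted along x.
ground = rng.normal(loc=(0.0, 0.0, 0.0), scale=0.2, size=(50, 3))
car = rng.normal(loc=(5.0, 0.0, 0.0), scale=0.2, size=(50, 3))
cloud = np.vstack([ground, car])

centroids = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
pred = segment_points(cloud, centroids, ["ground", "car"])
print(pred[0], pred[-1])  # first point lies near the origin, last near x=5
```

The output is one label per input point, which is exactly the shape of a semantic segmentation result: the same point cloud, annotated point-by-point with class identities.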

HAMMER: Harnessing MLLM via Cross-Modal Integration for Intention-Driven 3D Affordance Grounding

Mar 02, 2026

Universal 3D Shape Matching via Coarse-to-Fine Language Guidance

Feb 24, 2026

Align then Adapt: Rethinking Parameter-Efficient Transfer Learning in 4D Perception

Feb 26, 2026

ArtHOI: Articulated Human-Object Interaction Synthesis by 4D Reconstruction from Video Priors

Mar 04, 2026

Fake It Right: Injecting Anatomical Logic into Synthetic Supervised Pre-training for Medical Segmentation

Mar 01, 2026

FocusTrack: One-Stage Focus-and-Suppress Framework for 3D Point Cloud Object Tracking

Feb 27, 2026

Open-vocabulary 3D scene perception in industrial environments

Feb 23, 2026

Viewpoint Recommendation for Point Cloud Labeling through Interaction Cost Modeling

Feb 11, 2026

Monocular Open Vocabulary Occupancy Prediction for Indoor Scenes

Feb 26, 2026

SO3UFormer: Learning Intrinsic Spherical Features for Rotation-Robust Panoramic Segmentation

Feb 26, 2026