Abstract: Semantic segmentation is one of the most fundamental tasks in image understanding, with a long history of research and, consequently, a myriad of different approaches. Traditional methods strive to train models from scratch, requiring vast amounts of computational resources and training data. With the advent of open-vocabulary semantic segmentation, which asks models to classify beyond learned categories, acquiring large quantities of finely annotated data becomes prohibitively expensive. Researchers have instead turned to training-free methods that leverage existing models built for tasks where data is more easily acquired. Specifically, this survey covers the history, nuances, development of ideas, and state of the art in training-free open-vocabulary semantic segmentation that leverages existing multi-modal classification models. We first give a preliminary on the task definition, followed by an overview of popular model archetypes, and then spotlight over 30 approaches split into broader research branches: purely CLIP-based methods, those leveraging auxiliary visual foundation models, and those relying on generative methods. Subsequently, we discuss the limitations and potential problems of current research and provide some underexplored ideas for future study. We believe this survey will serve as a good onboarding read for new researchers and spark increased interest in the area.
Abstract: Domain-adaptive panoptic segmentation promises to resolve the long tail of corner cases in natural scene understanding. The previous state of the art addresses this problem with cross-task consistency, careful system-level optimization, and heuristic improvement of teacher predictions. In contrast, we propose to build upon the remarkable capability of mask transformers to estimate their own prediction uncertainty. Our method avoids noise amplification by leveraging the fine-grained confidence of panoptic teacher predictions. In particular, we modulate the loss with mask-wide confidence and discourage back-propagation in pixels with an uncertain teacher or a confident student. Experimental evaluation on standard benchmarks reveals a substantial contribution of the proposed selection techniques. We report 47.4 PQ on Synthia→Cityscapes, which corresponds to an improvement of 6.2 percentage points over the state of the art. The source code is available at https://github.com/helen1c/MC-PanDA.
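To make the selection mechanism described in this abstract concrete, the following is a minimal sketch of confidence-based loss masking for self-training: per-pixel loss is weighted by a confidence score and suppressed where the teacher is uncertain or the student is already confident. All names, shapes, and thresholds here are illustrative assumptions, not the authors' implementation (a true mask transformer would supply one confidence per predicted mask rather than one per image).

```python
import torch

def masked_self_training_loss(student_logits, teacher_probs,
                              mask_confidence,
                              tau_teacher=0.95, tau_student=0.9):
    """Hypothetical sketch of confidence-based loss masking.

    student_logits:  (B, C, H, W) raw student outputs
    teacher_probs:   (B, C, H, W) teacher softmax predictions
    mask_confidence: (B,) scalar confidence per image, standing in
                     for the per-mask confidence of a mask transformer
    Thresholds tau_teacher / tau_student are made-up values.
    """
    # Pseudo-labels and per-pixel teacher confidence
    teacher_conf, pseudo_labels = teacher_probs.max(dim=1)            # (B, H, W)
    student_conf = student_logits.softmax(dim=1).max(dim=1).values    # (B, H, W)

    # Keep only pixels with a confident teacher and a not-yet-confident student
    keep = (teacher_conf > tau_teacher) & (student_conf < tau_student)

    per_pixel = torch.nn.functional.cross_entropy(
        student_logits, pseudo_labels, reduction="none")              # (B, H, W)

    # Modulate the surviving pixel losses by mask-wide confidence
    loss = mask_confidence.view(-1, 1, 1) * keep * per_pixel
    return loss.sum() / keep.sum().clamp(min=1)
```

The key design point the abstract emphasizes is that both gates act together: down-weighting by teacher confidence avoids amplifying pseudo-label noise, while skipping confident-student pixels focuses the self-training signal where it can still change the prediction.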
Abstract: Semantic segmentation is an important and well-known task in the field of computer vision, in which we attempt to assign a semantic class to each input element. In semantic segmentation of 2D images, the input elements are pixels. Alternatively, the input can be a point cloud, where each input element represents one point. By the term point cloud, we refer to a set of points defined by spatial coordinates with respect to some reference coordinate system. In addition to the position of points in space, other features can also be defined for each point, such as RGB components. In this paper, we conduct semantic segmentation on the S3DIS dataset, where each point cloud represents one room. We train several models on S3DIS, namely PointCNN, PointNet++, Cylinder3D, Point Transformer, and RepSurf. We compare the obtained results using standard evaluation metrics for semantic segmentation and present a comparison of the models based on inference speed.
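As a concrete illustration of the standard evaluation metric referenced above, the sketch below computes per-class IoU and mIoU from point-wise predictions; the function name and array layout are our own assumptions (S3DIS defines 13 semantic classes).

```python
import numpy as np

def mean_iou(preds, labels, num_classes=13):
    """Per-class IoU and mIoU for point-cloud semantic segmentation.

    preds, labels: flat integer arrays with one class index per point.
    Classes absent from both predictions and labels are excluded
    from the mean via NaN.
    """
    # Build the confusion matrix: rows = ground truth, cols = prediction
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (labels, preds), 1)

    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    union = tp + fp + fn

    iou = np.where(union > 0, tp / np.maximum(union, 1), np.nan)
    return iou, np.nanmean(iou)
```

In practice each model's predictions would be gathered over all rooms in the held-out S3DIS area before computing the confusion matrix, so that mIoU reflects dataset-wide rather than per-room performance.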