Ruikai Cui

Adaptive Low Rank Adaptation of Segment Anything to Salient Object Detection

Aug 10, 2023
Ruikai Cui, Siyuan He, Shi Qiu

Foundation models, such as OpenAI's GPT-3 and GPT-4, Meta's LLaMA, and Google's PaLM2, have revolutionized the field of artificial intelligence. A notable paradigm shift has been the advent of the Segment Anything Model (SAM), which has exhibited a remarkable capability to segment real-world objects, trained on 1 billion masks and 11 million images. Although SAM excels in general object segmentation, it lacks the intrinsic ability to detect salient objects, resulting in suboptimal performance in this domain. To address this challenge, we present the Segment Salient Object Model (SSOM), an innovative approach that adaptively fine-tunes SAM for salient object detection by harnessing the low-rank structure inherent in deep learning. Comprehensive qualitative and quantitative evaluations across five challenging RGB benchmark datasets demonstrate the superior performance of our approach, surpassing state-of-the-art methods.
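
To make the adaptation mechanism concrete, here is a minimal sketch of low-rank adaptation (LoRA) applied to a frozen linear layer, the general technique SSOM builds on; the wrapped layer, rank, and scaling below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of low-rank adaptation of a frozen linear layer, in the
# spirit of adaptively fine-tuning SAM. Rank, scaling, and the layer chosen
# are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        # Trainable low-rank factors: A (down-projection), B (up-projection)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus low-rank residual update: W x + scale * B A x
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


# Example: wrap a projection layer of a frozen pretrained backbone
proj = nn.Linear(256, 256)
adapted = LoRALinear(proj, rank=4)
out = adapted(torch.randn(2, 196, 256))  # only lora_a / lora_b receive gradients
```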

* 13 pages, 0 figures 

Model Calibration in Dense Classification with Adaptive Label Perturbation

Aug 03, 2023
Jiawei Liu, Changkun Ye, Shan Wang, Ruikai Cui, Jing Zhang, Kaihao Zhang, Nick Barnes

For safety-related applications, it is crucial to produce trustworthy deep neural networks whose predictions are associated with confidence scores that represent the likelihood of correctness for subsequent decision-making. Existing dense binary classification models are prone to being over-confident. To improve model calibration, we propose Adaptive Stochastic Label Perturbation (ASLP), which learns a unique label perturbation level for each training image. ASLP employs our proposed Self-Calibrating Binary Cross Entropy (SC-BCE) loss, which unifies label perturbation processes, including stochastic approaches (like DisturbLabel) and label smoothing, to correct calibration while maintaining classification rates. ASLP follows the Maximum Entropy Inference of classic statistical mechanics to maximise prediction entropy with respect to missing information. It does so while either (1) preserving classification accuracy on known data as a conservative solution, or (2) specifically improving model calibration by minimising the gap between the prediction accuracy and the expected confidence of the target training label. Extensive results demonstrate that ASLP can significantly improve the calibration of dense binary classification models on both in-distribution and out-of-distribution data. The code is available at https://github.com/Carlisle-Liu/ASLP.
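
As a rough illustration of label perturbation for calibration, the sketch below softens the binary targets of a dense prediction with a per-image perturbation level; the actual SC-BCE loss and the way ASLP learns this level are defined in the paper, so the function here is only an assumed simplification.

```python
# Hedged sketch: binary cross-entropy against targets softened by a per-image
# perturbation level. The exact SC-BCE formulation and ASLP's learning of the
# level follow the paper; this only illustrates the general mechanism.
import torch
import torch.nn.functional as F


def perturbed_bce(logits: torch.Tensor, targets: torch.Tensor,
                  perturb_level: torch.Tensor) -> torch.Tensor:
    """logits, targets: (B, 1, H, W); perturb_level: (B,) values in [0, 1]."""
    eps = perturb_level.view(-1, 1, 1, 1)
    # Soften hard labels toward 0.5, i.e. label smoothing whose strength is
    # chosen per training image rather than fixed globally.
    soft_targets = targets * (1 - eps) + 0.5 * eps
    return F.binary_cross_entropy_with_logits(logits, soft_targets)


logits = torch.randn(4, 1, 64, 64)
targets = (torch.rand(4, 1, 64, 64) > 0.5).float()
level = torch.sigmoid(torch.zeros(4, requires_grad=True))  # learnable per image
loss = perturbed_bce(logits, targets, level)
```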

* Accepted by ICCV 2023 

P2C: Self-Supervised Point Cloud Completion from Single Partial Clouds

Jul 27, 2023
Ruikai Cui, Shi Qiu, Saeed Anwar, Jiawei Liu, Chaoyue Xing, Jing Zhang, Nick Barnes

Point cloud completion aims to recover the complete shape based on a partial observation. Existing methods require either complete point clouds or multiple partial observations of the same object for learning. In contrast to previous approaches, we present Partial2Complete (P2C), the first self-supervised framework that completes point cloud objects using training samples consisting of only a single incomplete point cloud per object. Specifically, our framework groups incomplete point clouds into local patches as input and predicts masked patches by learning prior information from different partial objects. We also propose Region-Aware Chamfer Distance to regularize shape mismatch without limiting completion capability, and devise the Normal Consistency Constraint to incorporate a local planarity assumption, encouraging the recovered shape surface to be continuous and complete. In this way, P2C no longer needs multiple observations or complete point clouds as ground truth. Instead, structural cues are learned from a category-specific dataset to complete partial point clouds of objects. We demonstrate the effectiveness of our approach on both synthetic ShapeNet data and real-world ScanNet data, showing that P2C produces comparable results to methods trained with complete shapes, and outperforms methods learned with multiple partial observations. Code is available at https://github.com/CuiRuikai/Partial2Complete.
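
The completion objective builds on the bidirectional Chamfer distance; the sketch below implements only that standard distance, since the region-aware weighting and the normal consistency term are the paper's contributions and are not reproduced here.

```python
# Standard bidirectional Chamfer distance between two point clouds, the base
# metric that P2C's Region-Aware variant extends (extension not shown).
import torch


def chamfer_distance(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """x: (B, N, 3), y: (B, M, 3) point clouds."""
    dist = torch.cdist(x, y) ** 2                  # pairwise squared distances, (B, N, M)
    x_to_y = dist.min(dim=2).values.mean(dim=1)    # nearest y point for each x point
    y_to_x = dist.min(dim=1).values.mean(dim=1)    # nearest x point for each y point
    return (x_to_y + y_to_x).mean()


partial = torch.rand(2, 1024, 3)
predicted = torch.rand(2, 2048, 3)
loss = chamfer_distance(predicted, partial)
```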

* Accepted to ICCV 2023 

Energy-Based Residual Latent Transport for Unsupervised Point Cloud Completion

Nov 13, 2022
Ruikai Cui, Shi Qiu, Saeed Anwar, Jing Zhang, Nick Barnes

Unsupervised point cloud completion aims to infer the whole geometry of a partial object observation without requiring partial-complete correspondence. Differing from existing deterministic approaches, we advocate generative-modeling-based unsupervised point cloud completion to explore the missing correspondence. Specifically, we propose a novel framework that performs completion by transforming a partial shape encoding into a complete one with a latent transport module, which is designed as a latent-space energy-based model (EBM) in an encoder-decoder architecture, aiming to learn a probability distribution conditioned on the partial shape encoding. To train the latent code transport module and the encoder-decoder network jointly, we introduce a residual sampling strategy, where the residual captures the domain gap between the partial and complete shape latent spaces. As a generative-model-based framework, our method can produce uncertainty maps consistent with human perception, leading to explainable unsupervised point cloud completion. We experimentally show that the proposed method produces high-fidelity completion results, outperforming state-of-the-art models by a significant margin.
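
For intuition, the sketch below shows conditional Langevin-style sampling in latent space, where a residual added to the partial shape's code is driven toward low-energy (complete-shape-like) regions; the energy network, step schedule, and residual parameterization are illustrative assumptions, not the paper's exact model.

```python
# Hedged sketch of latent-space energy-based sampling: Langevin-style updates
# move a residual on top of the partial shape's latent code toward low-energy
# codes. Architecture and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

energy_net = nn.Sequential(nn.Linear(256 * 2, 256), nn.ReLU(), nn.Linear(256, 1))


def latent_transport(z_partial: torch.Tensor, steps: int = 20,
                     step_size: float = 0.1) -> torch.Tensor:
    residual = torch.zeros_like(z_partial, requires_grad=True)
    for _ in range(steps):
        z = z_partial + residual                      # residual models the domain gap
        energy = energy_net(torch.cat([z, z_partial], dim=-1)).sum()
        grad, = torch.autograd.grad(energy, residual)
        noise = torch.randn_like(residual)
        # Langevin update: gradient step on the energy plus Gaussian noise
        residual = (residual - 0.5 * step_size * grad
                    + (step_size ** 0.5) * noise).detach().requires_grad_(True)
    return z_partial + residual.detach()


z_complete = latent_transport(torch.randn(4, 256))  # would be fed to the shape decoder
```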

* BMVC 2022 paper 