Abstract: Multiview clustering (MC) aims to group samples using consistent and complementary information across views. Subspace clustering, as a fundamental technique of MC, has attracted significant attention. In this paper, we propose a novel joint sparse self-representation learning model for MC, whose distinguishing feature is the extraction of view-specific local information through cardinality (i.e., $\ell_0$-norm) constraints rather than Graph-Laplacian regularization. Specifically, under each view, the cardinality constraints directly restrict the samples used in the self-representation stage, extracting reliable local and global structure information, while a low-rank constraint helps reveal a globally coherent structure in the consensus affinity matrix during merging. The attendant challenge is that Augmented Lagrange Method (ALM)-based alternating minimization algorithms cannot guarantee convergence when applied directly to our nonconvex, nonsmooth model, resulting in poor generalization ability. To address this, we develop an alternating quadratic penalty (AQP) method with guaranteed global convergence, in which two subproblems are solved iteratively in closed form. Empirical results on six standard datasets demonstrate the superiority of our model and the AQP method over eight state-of-the-art algorithms.
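For intuition only, the sketch below shows a cardinality-constrained self-representation step for a single view via column-wise iterative hard thresholding: each sample is reconstructed from at most k other samples. It is not the paper's AQP solver or exact joint model; the function name, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def l0_self_representation(X, k, n_iters=200):
    """Iterative hard thresholding (IHT) sketch for
        min_Z 0.5 * ||X - X Z||_F^2  s.t.  ||Z[:, i]||_0 <= k,  Z[i, i] = 0.

    X : (d, n) data matrix of one view.
    Returns Z : (n, n) sparse self-representation matrix.
    """
    n = X.shape[1]
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
    Z = np.zeros((n, n))
    for _ in range(n_iters):
        grad = X.T @ (X @ Z - X)        # gradient of 0.5 * ||X - X Z||_F^2
        Z -= step * grad                # gradient step
        np.fill_diagonal(Z, 0.0)        # forbid trivial self-representation
        # hard-threshold: keep only the k largest-magnitude entries per column
        drop = np.argpartition(np.abs(Z), -k, axis=0)[:-k, :]
        np.put_along_axis(Z, drop, 0.0, axis=0)
    return Z
```

A per-view affinity could then be symmetrized, e.g. A = 0.5 * (|Z| + |Z|^T), before the views are merged into the consensus affinity matrix; the low-rank merging step itself is not sketched here.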
Abstract: Accurate parameter identification in photovoltaic (PV) models is crucial for performance evaluation but remains challenging due to their nonlinear, multimodal, and high-dimensional nature. Although the Dung Beetle Optimization (DBO) algorithm has shown potential for such problems, it often suffers from premature convergence. To overcome this issue, this paper proposes a Memory-Enhanced Fractional-Order Dung Beetle Optimization (MFO-DBO) algorithm that integrates three coordinated strategies. First, fractional-order (FO) calculus introduces memory into the search process, enhancing convergence stability and solution quality. Second, a fractional-order logistic chaotic map improves population diversity during initialization. Third, a chaotic perturbation mechanism helps elite solutions escape local optima. Numerical results on the CEC2017 benchmark suite and the PV parameter identification problem demonstrate that MFO-DBO consistently outperforms advanced DBO variants, CEC competition winners, FO-based optimizers, enhanced classical algorithms, and recent metaheuristics in terms of accuracy, robustness, and convergence speed, while also maintaining a better balance between exploration and exploitation than the standard DBO algorithm.
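As a rough illustration of the ingredients named above, the sketch below shows generic Grünwald-Letnikov memory weights, a logistic-map chaotic initialization (a stand-in for the paper's fractional-order logistic map), and a memory-weighted displacement term. None of this reproduces the exact MFO-DBO update rules; all names and defaults are hypothetical.

```python
import numpy as np

def gl_memory_weights(alpha, L):
    """Truncated Grünwald-Letnikov weights for fractional order alpha:
    c_1 = alpha, c_{j+1} = c_j * (j - alpha) / (j + 1)."""
    c = np.empty(L)
    c[0] = alpha
    for j in range(1, L):
        c[j] = c[j - 1] * (j - alpha) / (j + 1)
    return c

def logistic_chaotic_init(pop_size, dim, lb, ub, r=4.0, x0=0.7):
    """Chaotic population initialization via a logistic map rescaled to [lb, ub]
    (stand-in for the fractional-order logistic map used in the paper)."""
    x, pop = x0, np.empty((pop_size, dim))
    for i in range(pop_size):
        for d in range(dim):
            x = r * x * (1.0 - x)           # chaotic iteration in (0, 1)
            pop[i, d] = lb + x * (ub - lb)  # map to the search range
    return pop

def memory_term(positions, weights):
    """Memory displacement: weighted sum of recent position differences
    (newest first), mimicking a truncated fractional derivative."""
    L = min(len(weights), len(positions) - 1)
    return sum(weights[j] * (positions[-1 - j] - positions[-2 - j])
               for j in range(L))
```

In a fractional-order metaheuristic, such a memory term is typically added to the algorithm's standard position update so that recent moves influence the next candidate, with the weights decaying as the lag grows.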