Abstract: Fair classification is a critical challenge that has gained increasing importance due to international regulations and its growing use in high-stakes decision-making settings. Existing methods often rely on adversarial learning or distribution matching across sensitive groups; however, adversarial learning can be unstable, and distribution matching can be computationally intensive. To address these limitations, we propose a novel approach based on the characteristic function distance. Our method ensures that the learned representation contains minimal sensitive information while remaining highly effective for downstream tasks. By utilizing characteristic functions, we achieve a more stable and efficient solution compared to traditional methods. Additionally, we introduce a simple relaxation of the objective function that guarantees fairness in common classification models without degrading performance. Experimental results on benchmark datasets demonstrate that our approach consistently matches or exceeds the fairness and predictive accuracy of existing methods. Moreover, our method remains robust and computationally efficient, making it a practical solution for real-world applications.
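To make the core idea concrete, below is a minimal sketch (assuming a PyTorch setup) of an empirical characteristic-function distance between the representations of two sensitive groups, used as a fairness penalty on top of a task loss. The Gaussian frequency sampling, the number of frequencies, and the penalty weight are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def cf_distance(z_a, z_b, n_freq=256, scale=1.0):
    """Squared distance between the empirical characteristic functions of z_a and z_b.

    phi_P(t) = E_x[exp(i t^T x)] is estimated by averaging cosine/sine projections;
    the integral over frequencies t is approximated by Monte Carlo sampling.
    """
    d = z_a.shape[1]
    t = torch.randn(n_freq, d, device=z_a.device) * scale   # random frequencies
    # Real and imaginary parts of the empirical characteristic functions.
    re_a, im_a = torch.cos(z_a @ t.T).mean(0), torch.sin(z_a @ t.T).mean(0)
    re_b, im_b = torch.cos(z_b @ t.T).mean(0), torch.sin(z_b @ t.T).mean(0)
    return ((re_a - re_b) ** 2 + (im_a - im_b) ** 2).mean()

# Hypothetical usage inside a training step (encoder, classifier, lambda_fair are assumed names):
# z = encoder(x)
# loss = task_loss(classifier(z), y) + lambda_fair * cf_distance(z[s == 0], z[s == 1])
```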
Abstract: Conventional techniques for imposing monotonicity in MLPs by construction involve the use of non-negative weight constraints and bounded activation functions, which pose well-known optimization challenges. In this work, we generalize previous theoretical results, showing that MLPs with non-negative weight constraints and activations that saturate on alternating sides are universal approximators for monotonic functions. Additionally, we show an equivalence between the saturation side of the activations and the sign of the weight constraint. This connection allows us to prove that MLPs with convex monotone activations and non-positive constrained weights also qualify as universal approximators, in contrast to their non-negative constrained counterparts. Our results provide theoretical grounding for the empirical effectiveness observed in previous works while pointing to possible architectural simplifications. Moreover, to further alleviate the optimization difficulties, we propose an alternative formulation that allows the network to adjust its activations according to the sign of the weights. This eliminates the requirement for weight reparameterization, easing initialization and improving training stability. Experimental evaluation reinforces the validity of the theoretical results, showing that our novel approach compares favourably to traditional monotonic architectures.
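For reference, below is a minimal sketch (assuming PyTorch) of the conventional monotone-by-construction MLP that the abstract contrasts against: non-negative effective weights obtained through a reparameterization, composed with a bounded, saturating activation. The layer sizes and the choice of an absolute-value reparameterization are illustrative assumptions, not the proposed architecture.

```python
import torch
import torch.nn as nn

class NonNegLinear(nn.Linear):
    """Linear layer whose effective weights are constrained to be non-negative."""
    def forward(self, x):
        return nn.functional.linear(x, self.weight.abs(), self.bias)

class MonotoneMLP(nn.Module):
    """Monotonically non-decreasing in every input coordinate, by construction:
    non-negative weights composed with monotone activations preserve monotonicity."""
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.net = nn.Sequential(
            NonNegLinear(d_in, d_hidden), nn.Tanh(),      # bounded, saturating activation
            NonNegLinear(d_hidden, d_hidden), nn.Tanh(),
            NonNegLinear(d_hidden, 1),
        )
    def forward(self, x):
        return self.net(x)
```

The reparameterization step (here `weight.abs()`) is exactly what the proposed sign-adaptive formulation aims to remove.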
Abstract: This letter presents a novel approach to Active Fault Detection (AFD) that explicitly separates the task into two parts: Passive Fault Detection (PFD) and control input design. This formulation is very general, and most existing AFD literature can be viewed through this lens. By recognizing this separation, PFD methods can be leveraged to provide components that make efficient use of the available information, while the control input is designed to optimize the gathering of information. The core contribution of this work is FIERL, a general simulation-based approach for the design of such control strategies, using Constrained Reinforcement Learning (CRL) to optimize the performance of arbitrary passive detectors. The control policy is learned without requiring knowledge of the passive detector's inner workings, making FIERL broadly applicable, although it is especially useful when paired with the design of an efficient passive component. Unlike most AFD approaches, FIERL can handle complex scenarios such as continuous sets of fault modes. The effectiveness of FIERL is tested on a benchmark problem for actuator fault diagnosis, where FIERL is shown to be robust, generalizing to fault dynamics not seen in training.
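To illustrate the PFD / input-design separation, below is a minimal sketch (assuming a Gymnasium-style environment) in which an arbitrary passive detector is treated as a black box: the agent's reward is derived from the detector's output, and a cost signal for the input effort is exposed for a constrained RL algorithm. The detector interface, reward shaping, and cost definition are illustrative assumptions, not FIERL's exact formulation.

```python
import gymnasium as gym

class ActiveDetectionWrapper(gym.Wrapper):
    """Wraps a simulation environment so that a CRL agent learns control inputs
    that maximize the information extracted by a black-box passive detector."""
    def __init__(self, env, passive_detector, cost_weight=1.0):
        super().__init__(env)
        # Assumed interface: passive_detector.update(obs, action) -> detection confidence
        self.detector = passive_detector
        self.cost_weight = cost_weight

    def step(self, action):
        obs, _, terminated, truncated, info = self.env.step(action)
        # Reward the agent for whatever the passive detector manages to infer.
        reward = self.detector.update(obs, action)
        # Constraint signal consumed by a constrained RL algorithm (e.g. a Lagrangian method).
        info["cost"] = self.cost_weight * float((action ** 2).sum())
        return obs, reward, terminated, truncated, info
```

The wrapper never inspects the detector's internals, mirroring the claim that the control policy can be trained against arbitrary passive components.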