Interpretability is the next frontier in machine learning research. In the search for white-box models (as opposed to black-box models such as random forests or neural networks), rule induction algorithms are a logical and promising option, since the resulting rules are easily understood by humans. Fuzzy set theory and rough set theory have both been applied successfully to this paradigm, but almost always separately. As both approaches to rule induction involve granular computing based on the concept of equivalence classes, it is natural to combine them. The QuickRules\cite{JensenCornelis2009} algorithm was a first attempt at using fuzzy rough set theory for rule induction. It is based on QuickReduct, a greedy algorithm for building decision reducts. QuickRules already showed an improvement over other rule induction methods. However, to evaluate the full potential of a fuzzy rough rule induction algorithm, one needs to start from the foundations. In this paper, we introduce a novel rule induction algorithm called Fuzzy Rough Rule Induction (FRRI). We provide background and explain the workings of our algorithm. Furthermore, we perform a computational experiment to evaluate the performance of our algorithm and compare it to other state-of-the-art rule induction approaches. We find that our algorithm is more accurate while creating small rulesets consisting of relatively short rules. We end the paper by outlining some directions for future work.
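For intuition, QuickReduct greedily grows an attribute subset, at each step adding the attribute that most increases the dependency degree, until the dependency of the subset matches that of the full attribute set. The following is a minimal sketch of that greedy scheme only, assuming a user-supplied \texttt{dependency} function (e.g. a fuzzy rough positive-region based degree); it is not the FRRI algorithm itself.

\begin{verbatim}
def quick_reduct(attributes, dependency):
    """Greedy reduct search: repeatedly add the attribute that most
    increases the dependency degree of the current subset."""
    full = dependency(set(attributes))
    reduct = set()
    best = dependency(reduct)
    while best < full:
        gains = {a: dependency(reduct | {a})
                 for a in set(attributes) - reduct}
        a_star = max(gains, key=gains.get)
        if gains[a_star] <= best:
            break  # no remaining attribute improves the subset
        reduct.add(a_star)
        best = gains[a_star]
    return reduct
\end{verbatim}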
The Dominance-based Rough Set Approach (DRSA) has been proposed as a machine learning and knowledge discovery methodology for Multiple Criteria Decision Aiding (MCDA). Because it asks the decision maker (DM) only for simple preference information and supplies easily understandable and explainable recommendations, DRSA has gained considerable interest over the years and is now one of the most appreciated MCDA approaches. In fact, it has also been applied beyond the MCDA domain, as a general knowledge discovery and data mining methodology for the analysis of monotonic (and also non-monotonic) data. In this contribution, we recall the basic principles and the main concepts of DRSA, with a general overview of its developments and software. We also present a historical reconstruction of the genesis of the methodology, with a specific focus on the contribution of Roman S{\l}owi\'nski.
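For reference, the central DRSA constructs are the dominance-based approximations of upward (and, symmetrically, downward) unions of decision classes; in the standard notation of the DRSA literature,
\[
\underline{P}\bigl(Cl_t^{\geq}\bigr) = \bigl\{x \in U : D_P^{+}(x) \subseteq Cl_t^{\geq}\bigr\},
\qquad
\overline{P}\bigl(Cl_t^{\geq}\bigr) = \bigl\{x \in U : D_P^{-}(x) \cap Cl_t^{\geq} \neq \emptyset\bigr\},
\]
where $D_P^{+}(x)$ and $D_P^{-}(x)$ are the sets of objects dominating, respectively dominated by, $x$ on the criteria in $P$, and $Cl_t^{\geq}$ is the union of classes at least as good as $Cl_t$.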
In this article, a new Fuzzy Granular Approximation Classifier (FGAC) is introduced. The classifier is based on the previously introduced concept of the granular approximation and its multi-class classification case. The classifier is instance-based, and its biggest advantage is its local transparency, i.e., the ability to explain every individual prediction it makes. We first develop the FGAC for the binary and the multi-class classification case, and we discuss a variant that incorporates Ordered Weighted Average (OWA) operators. These variants of the FGAC are then empirically compared with other locally transparent ML methods. Finally, we discuss the transparency of the FGAC and its advantage over other locally transparent methods. We conclude that while the FGAC has predictive performance similar to other locally transparent ML models, its transparency can be superior in certain cases.
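As a reminder of a standard definition (not specific to this article), an OWA operator with weight vector $W = (w_1, \dots, w_n)$, $w_i \geq 0$, $\sum_{i=1}^{n} w_i = 1$, aggregates its arguments after sorting them:
\[
\mathrm{OWA}_W(a_1, \dots, a_n) = \sum_{i=1}^{n} w_i\, b_i,
\]
where $b_i$ denotes the $i$-th largest of the $a_j$. Particular weight vectors recover the minimum, the maximum, or the arithmetic mean, while intermediate choices yield softened, more outlier-robust versions of these aggregations.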
In granular computing, fuzzy sets can be approximated by granularly representable sets that are as close as possible to the original fuzzy set w.r.t. a given closeness measure. Such sets are called granular approximations. In this article, we introduce the concepts of disjoint and adjacent granules, and we examine how the new definitions affect granular approximations. First, we show that the new concepts are important for binary classification problems, since they help to keep decision regions separated (disjoint granules) while covering as much of the attribute space as possible (adjacent granules). Next, we consider granular approximations for multi-class classification problems, leading to the definition of a multi-class granular approximation. Finally, we show how to efficiently calculate multi-class granular approximations for {\L}ukasiewicz fuzzy connectives. We also provide graphical illustrations for a better understanding of the introduced concepts.
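For context, the {\L}ukasiewicz connectives mentioned above are the well-known t-norm and its residual implicator,
\[
T_L(a, b) = \max(0,\, a + b - 1),
\qquad
I_L(a, b) = \min(1,\, 1 - a + b),
\qquad a, b \in [0, 1].
\]
Their piecewise-linear form is what typically allows the associated approximation problems to be handled with linear-programming techniques.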
Inconsistency in prediction problems occurs when instances that relate in a certain way on the condition attributes do not follow the same relation on the decision attribute. For example, in ordinal classification with monotonicity constraints, it occurs when an instance dominating another instance on the condition attributes has been assigned to a worse decision class. It typically appears as a result of perturbation in the data, caused by incomplete knowledge (missing attributes) or by random effects during data generation (instability in the assessment of decision attribute values). Inconsistencies with respect to a crisp preorder relation (expressing either dominance or indiscernibility between instances) can be handled using symbolic approaches, such as rough set theory, or using statistical/machine learning approaches that involve optimization methods. Fuzzy rough sets can also be seen as a symbolic approach to inconsistency handling with respect to a fuzzy relation. In this article, we introduce a new machine learning method for inconsistency handling with respect to a fuzzy preorder relation. The novel approach is motivated by the existing machine learning approach used for crisp relations. We provide statistical foundations for it and develop optimization procedures that can be used to eliminate inconsistencies. We also prove important properties of these procedures and illustrate them with didactic examples.
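To make the notion concrete: in the fuzzy rough setting, a common way to formalize consistency (a standard formulation from the granular-representability literature; the notation here is ours) is to require, for a fuzzy preorder $R$ and label assignment $A$,
\[
T\bigl(R(x, y),\, A(y)\bigr) \leq A(x) \quad \text{for all instances } x, y,
\]
with $T$ a t-norm: if $x$ dominates $y$ to a high degree, then $x$ must be labeled at least as highly as $y$. The optimization procedures then seek a minimal modification of $A$, w.r.t. a chosen loss, under which these constraints hold.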
Despite the high accuracy offered by state-of-the-art deep natural-language models (e.g. LSTM, BERT), their application in real-life settings is still widely limited, as they behave like a black box to the end user. Hence, explainability is rapidly becoming a fundamental requirement of future-generation data-driven systems based on deep-learning approaches. Several attempts have been made to bridge the gap between accuracy and interpretability. However, robust and specialized xAI (Explainable Artificial Intelligence) solutions tailored to deep natural-language models are still missing. We propose a new framework, named T-EBAnO, which provides innovative prediction-local and class-based model-global explanation strategies tailored to black-box deep natural-language models. Given a deep NLP model and the textual input data, T-EBAnO provides an objective, human-readable, domain-specific assessment of the reasons behind the automatic decision-making process. Specifically, the framework extracts sets of interpretable features by mining the inner knowledge of the model. Then, it quantifies the influence of each feature on the prediction process by exploiting the novel normalized Perturbation Influence Relation index at the local level, and the novel Global Absolute Influence and Global Relative Influence indexes at the global level. The effectiveness and quality of the local and global explanations obtained with T-EBAnO are demonstrated on (i) a sentiment analysis task performed by a fine-tuned BERT model, and (ii) a toxic comment classification task performed by an LSTM model.
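The precise index definitions are given in the paper; purely as an illustration of the general perturbation-based idea (this is not the normalized Perturbation Influence Relation formula, and \texttt{predict\_proba} is a placeholder for the wrapped model), one can score a feature by how much the predicted class probability drops when its tokens are masked out:

\begin{verbatim}
def influence(predict_proba, tokens, feature_idxs, target_class,
              mask="[MASK]"):
    """Probability drop on `target_class` when the tokens of one
    interpretable feature are masked out of the input text."""
    original = predict_proba(" ".join(tokens))[target_class]
    masked = [mask if i in feature_idxs else t
              for i, t in enumerate(tokens)]
    perturbed = predict_proba(" ".join(masked))[target_class]
    return original - perturbed  # > 0: feature supported the prediction
\end{verbatim}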
A variant of the classical knapsack problem is considered in which each item is associated with an integer weight and a qualitative level. We define a dominance relation over the feasible subsets of the given item set and show that this relation defines a preorder. We propose a dynamic programming algorithm to compute the entire set of non-dominated rank cardinality vectors, and we present two greedy algorithms, each of which efficiently computes a single efficient solution.
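As a rough sketch of the dynamic-programming idea (an illustrative reading under simplified assumptions of ours: levels $1, \dots, L$ with $L$ best, and one count vector dominating another iff it has at least as many items of level $\geq k$ for every $k$; the paper's definitions take precedence):

\begin{verbatim}
def suffix_sums(counts):
    # number of chosen items with level >= k, for k = 1..L
    out, total = [], 0
    for c in reversed(counts):
        total += c
        out.append(total)
    return tuple(reversed(out))

def dominates(u, v):
    su, sv = suffix_sums(u), suffix_sums(v)
    return all(a >= b for a, b in zip(su, sv)) and su != sv

def nondominated_vectors(items, capacity, L):
    """items: list of (weight, level) pairs, level in 1..L.
    Sweep the items, tracking the level-count vectors
    reachable at each total weight."""
    table = [set() for _ in range(capacity + 1)]
    table[0].add((0,) * L)
    for weight, level in items:
        for w in range(capacity - weight, -1, -1):
            for counts in list(table[w]):
                new = list(counts)
                new[level - 1] += 1
                table[w + weight].add(tuple(new))
    pool = {v for cell in table for v in cell}
    return [u for u in pool if not any(dominates(v, u) for v in pool)]
\end{verbatim}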
This paper presents an integrated methodology supporting decisions in urban planning. In particular, it deals with the prioritization and selection of a portfolio of projects related to buildings of value for the cultural heritage of cities. Our methodology has been validated on the historical center of Naples, Italy. Each project is assessed on the basis of a set of both quantitative and qualitative criteria, in order to determine its level of priority for further selection. This step was performed through the application of the Electre Tri-nC method, a multiple criteria outranking-based model for ordinal classification (or sorting) problems, which assigns a priority level to each project as an analytical `recommendation' tool. Resource limits (namely budgetary constraints), as well as some logical constraints related to urban policy requirements, are then taken into consideration together with the priorities of the projects in a portfolio analysis model that identifies the efficient portfolios and supports the selection of the most adequate set of projects to activate. The process has been conducted through interaction between analysts, municipality representatives, and experts. The proposed methodology is generic enough to be applied to other territorial or urban planning problems. More precisely, given the increasing interest of historical cities in restoring their cultural heritage, the integrated multiple criteria decision aiding tool proposed in this paper has considerable potential for future use.
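Purely as an illustration of the portfolio-selection step (a generic 0-1 programming sketch under assumptions of ours; it is not the paper's model and leaves out the Electre Tri-nC scoring and the logical urban-policy constraints):

\begin{verbatim}
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

def select_portfolio(priorities, costs, budget):
    """Pick a budget-feasible portfolio maximizing total priority.
    priorities/costs: dicts keyed by project id."""
    prob = LpProblem("portfolio", LpMaximize)
    x = {p: LpVariable(f"x_{p}", cat=LpBinary) for p in priorities}
    prob += lpSum(priorities[p] * x[p] for p in priorities)
    prob += lpSum(costs[p] * x[p] for p in priorities) <= budget
    prob.solve()
    return [p for p in priorities if x[p].value() == 1]
\end{verbatim}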
Categorization by Similarity-Dissimilarity (Cat-SD) is a multiple criteria decision aiding method for dealing with nominal classification problems (predefined and non-ordered categories). Actions are assessed according to multiple criteria and assigned to one or more categories, each of which is defined by a set of reference actions. The assignment of an action to a given category depends on the comparison of the action to the reference set according to a likeness threshold. Distinct sets of criteria weights, interaction coefficients, and likeness thresholds can be defined per category. When applying Cat-SD to complex decision problems, considering a hierarchy of criteria may help to decompose them. We propose to apply the Multiple Criteria Hierarchy Process (MCHP) to Cat-SD, adapting MCHP to take into account possible interaction effects between criteria structured in a hierarchical way. We also consider an imprecise elicitation of parameters. To explore the assignments obtained by Cat-SD over the possible sets of parameters, we apply Stochastic Multicriteria Acceptability Analysis (SMAA), which allows statistical conclusions to be drawn on the classification of the actions. The proposed method, SMAA-hCat-SD, helps the decision maker to check the effects of the variation of parameters on the classification at different levels of the hierarchy. We also propose a procedure to obtain a final classification fulfilling certain requirements, taking into account the hierarchy of criteria and the probabilistic assignments obtained by applying SMAA. The application of the proposed method is shown through an example.
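As a generic illustration of the SMAA step only (not the SMAA-hCat-SD procedure; \texttt{sample\_params} and \texttt{assign} below are placeholders for a sampler over the feasible parameter space and for the Cat-SD assignment rule):

\begin{verbatim}
from collections import Counter

def acceptability_indices(actions, sample_params, assign,
                          n_samples=10000):
    """Estimate, per action, the share of sampled parameter vectors
    under which it lands in each category (SMAA-style)."""
    counts = {a: Counter() for a in actions}
    for _ in range(n_samples):
        params = sample_params()  # weights, thresholds, interactions
        for a in actions:
            counts[a][assign(a, params)] += 1
    return {a: {cat: n / n_samples for cat, n in counts[a].items()}
            for a in actions}
\end{verbatim}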
We propose a development of the Analytic Hierarchy Process (AHP) that permits using the methodology also for decision problems with a very large number of alternatives evaluated with respect to several criteria. While the application of the original AHP method involves many pairwise comparisons between alternatives and criteria, our proposal is composed of four steps: (i) direct evaluation of the alternatives at hand on the considered criteria; (ii) selection of some reference evaluations; (iii) application of the original AHP method to the reference evaluations; (iv) revision of the direct evaluations on the basis of the prioritization supplied by AHP on the reference evaluations. The new proposal has been tested and validated in an experiment conducted on a sample of university students. The new methodology has then been applied to a real-world problem involving the evaluation of 21 Social Housing initiatives located in the Piedmont region (Italy). To take into account interaction between criteria, the Choquet integral preference model has been considered within a Non Additive Robust Ordinal Regression approach.
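For reference, the core AHP computation in step (iii) derives priority weights from a positive reciprocal pairwise comparison matrix; below is a minimal sketch using the standard row geometric-mean approximation of the principal eigenvector (standard AHP, not the extension proposed in the paper):

\begin{verbatim}
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from a positive reciprocal pairwise
    comparison matrix (row geometric-mean approximation)."""
    A = np.asarray(pairwise, dtype=float)
    g = A.prod(axis=1) ** (1.0 / A.shape[1])  # row geometric means
    return g / g.sum()                        # weights sum to 1

# Example: three reference evaluations compared on one criterion.
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
print(ahp_priorities(A))
\end{verbatim}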