Abstract: In this article, we address the inconsistency of a system of max-min fuzzy relational equations by minimally modifying the matrix governing the system in order to achieve consistency. Our method yields consistent systems that approximate the original inconsistent system in the following sense: the right-hand side vector of each consistent system is that of the inconsistent system, and the coefficients of the matrix governing each consistent system are obtained by modifying, exactly and minimally, the entries of the original matrix that must be corrected to achieve consistency, while leaving all other entries unchanged. To obtain a consistent system that closely approximates the considered inconsistent system, we study the distance (in terms of the $L_1$, $L_2$ or $L_\infty$ norm) between the matrix of the inconsistent system and the set formed by the matrices of consistent systems that use the same right-hand side vector as the inconsistent system. We show that our method allows us to directly compute matrices of consistent systems that use the same right-hand side vector as the inconsistent system and whose distance to the matrix of the inconsistent system, in terms of the $L_\infty$ norm, is minimal (the computational cost is higher when the $L_1$ or $L_2$ norm is used). We also give an explicit analytical formula for computing this minimal $L_\infty$ distance. Finally, we translate our results to systems of min-max fuzzy relational equations and present some potential applications.
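To make the underlying notions concrete, the sketch below (with illustrative data and function names that are not taken from the article) shows the $\max-\min$ composition, Sanchez's consistency test based on the greatest potential solution, and the $L_\infty$ distance between two matrices; it does not implement the article's construction of minimally modified matrices.

```python
import numpy as np

def max_min(A, x):
    """Max-min composition: (A box x)_i = max_j min(a_ij, x_j)."""
    return np.max(np.minimum(A, x[None, :]), axis=1)

def greatest_potential_solution(A, b):
    """Goedel residual: x*_j = min_i (a_ij -> b_i), with a -> b = 1 if a <= b else b."""
    return np.min(np.where(A <= b[:, None], 1.0, b[:, None]), axis=0)

def is_consistent(A, b):
    """Sanchez: A box x = b is consistent iff its greatest potential solution x*
    satisfies A box x* = b."""
    return np.allclose(max_min(A, greatest_potential_solution(A, b)), b)

def linf_matrix_distance(A1, A2):
    """L_inf distance between two matrices: largest absolute entry difference."""
    return np.max(np.abs(A1 - A2))

# illustrative data (not from the article)
A = np.array([[0.9, 0.3], [0.2, 0.8]])
b = np.array([0.5, 0.6])
print(is_consistent(A, b))   # -> True for this toy system
```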
Abstract: In this article, we introduce a neuro-symbolic approach that combines a low-level perception task performed by a neural network with a high-level reasoning task performed by a possibilistic rule-based system. The goal is to derive, for each input instance, the degree of possibility that it belongs to a target (meta-)concept. This (meta-)concept is connected to intermediate concepts by a possibilistic rule-based system. The probability of each intermediate concept for the input instance is inferred using a neural network. The connection between the low-level perception task and the high-level reasoning task lies in the transformation of the neural network outputs, modeled by probability distributions (through softmax activation), into possibility distributions. The use of intermediate concepts is valuable for explanation purposes: using the rule-based system, the classification of an input instance as an element of the (meta-)concept can be justified by the fact that intermediate concepts have been recognized. On the technical side, our contribution consists of the design of efficient methods for defining the matrix relation and the equation system associated with a possibilistic rule-based system. The corresponding matrix and equation are key data structures used to perform inferences from a possibilistic rule-based system and to learn the values of the rule parameters of such a system from a training data sample. Furthermore, leveraging recent results on the handling of inconsistent systems of fuzzy relational equations, we present an approach for learning rule parameters from multiple training data samples. Experiments carried out on the MNIST addition problems and the MNIST Sudoku puzzle problems highlight the effectiveness of our approach compared with state-of-the-art neuro-symbolic approaches.
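The abstract does not specify which probability-possibility transformation is used; a classical choice is the optimal transformation of Dubois and Prade, sketched below on an illustrative softmax output (whether the article uses exactly this transformation is an assumption).

```python
import numpy as np

def prob_to_poss(p):
    """Classical probability-possibility transformation (Dubois-Prade):
    with probabilities sorted decreasingly p_(1) >= ... >= p_(n),
    pi_(i) = sum_{j >= i} p_(j); tied probabilities get the same possibility."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(-p)                  # indices by decreasing probability
    tail = np.cumsum(p[order][::-1])[::-1]  # tail sums in that order
    pi = np.empty_like(p)
    pi[order] = tail
    for v in np.unique(p):                  # equal probabilities -> equal degrees
        pi[p == v] = pi[p == v].max()
    return pi

# e.g. a softmax output over three classes (illustrative values)
print(prob_to_poss([0.6, 0.3, 0.1]))   # -> [1.0, 0.4, 0.1]
```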
Abstract: In this article, we introduce a method, based on systems of fuzzy relational equations, for learning a capacity underlying a Sugeno integral from training data. To the training data, we associate two systems of equations: a $\max-\min$ system and a $\min-\max$ system. By solving these two systems (when they are consistent) using Sanchez's results, we show that we can directly obtain the extremal capacities representing the training data. By reducing the $\max-\min$ (resp. $\min-\max$) system of equations to subsets of criteria of cardinality less than or equal to $q$ (resp. of cardinality greater than or equal to $n-q$), where $n$ is the number of criteria, we give a sufficient condition for deducing, from its potential greatest solution (resp. its potential lowest solution), a $q$-maxitive (resp. $q$-minitive) capacity. Finally, if these two reduced systems of equations are inconsistent, we show how to obtain the greatest approximate $q$-maxitive capacity and the lowest approximate $q$-minitive capacity, using recent results on handling the inconsistency of systems of fuzzy relational equations.
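As background, the sketch below evaluates a Sugeno integral with respect to a given capacity; the capacity values and the evaluation profile are illustrative, not learned from training data as in the article.

```python
def sugeno_integral(x, mu):
    """Sugeno integral of x w.r.t. a capacity mu (a dict: frozenset of
    criteria indices -> value in [0, 1]):
    S_mu(x) = max_i min(x_(i), mu({sigma(i), ..., sigma(n)})),
    where x_(1) <= ... <= x_(n) is the increasing reordering of x."""
    n = len(x)
    order = sorted(range(n), key=lambda j: x[j])   # criteria by increasing value
    s = 0.0
    for i, j in enumerate(order):
        coalition = frozenset(order[i:])           # criteria with the n-i largest values
        s = max(s, min(x[j], mu[coalition]))
    return s

# illustrative 2-criteria capacity and profile (not taken from the article)
mu = {frozenset(): 0.0, frozenset({0}): 0.3, frozenset({1}): 0.7, frozenset({0, 1}): 1.0}
print(sugeno_integral([0.8, 0.4], mu))   # -> 0.4
```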
Abstract: In this article, we study the inconsistency of a system of $\max-T$ fuzzy relational equations of the form $A \Box_{T}^{\max} x = b$, where $T$ is a t-norm among $\min$, the product or Lukasiewicz's t-norm. For an inconsistent $\max-T$ system, we directly construct a canonical maximal consistent subsystem (w.r.t. the inclusion order). The main tool used to obtain it is the analytical formula which computes the Chebyshev distance $\Delta = \inf_{c \in \mathcal{C}} \Vert b - c \Vert$ associated with the inconsistent $\max-T$ system, where $\mathcal{C}$ is the set of right-hand sides of consistent systems defined with the same matrix $A$. Based on the same analytical formula, we give, for an inconsistent $\max-\min$ system, an efficient method to obtain all its consistent subsystems, and we show how to iteratively get all its maximal consistent subsystems.
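A minimal sketch of the $\max-T$ composition and of the residual-based (Sanchez-style) consistency test for the three t-norms, on illustrative data; it shows how inconsistency is detected, not the article's construction of maximal consistent subsystems.

```python
import numpy as np

# t-norms T and their residual implicators on [0, 1]
OPS = {
    "min":  (lambda a, x: np.minimum(a, x),                        # Goedel pair
             lambda a, b: np.where(a <= b, 1.0, b)),
    "prod": (lambda a, x: a * x,                                   # Goguen pair
             lambda a, b: np.where(a <= b, 1.0, b / np.maximum(a, 1e-12))),
    "luka": (lambda a, x: np.maximum(a + x - 1.0, 0.0),            # Lukasiewicz pair
             lambda a, b: np.minimum(1.0, 1.0 - a + b)),
}

def max_T(A, x, t):
    """(A box_T^max x)_i = max_j T(a_ij, x_j)."""
    return np.max(OPS[t][0](A, x[None, :]), axis=1)

def greatest_potential_solution(A, b, t):
    """x*_j = min_i (a_ij ->_T b_i); by Sanchez's result, the max-T system
    A box x = b is consistent iff A box x* = b."""
    return np.min(OPS[t][1](A, b[:, None]), axis=0)

A = np.array([[0.9, 0.4], [0.5, 0.7]])
b = np.array([0.6, 0.8])   # the second equation is unreachable: inconsistent system
for t in ("min", "prod", "luka"):
    x_star = greatest_potential_solution(A, b, t)
    print(t, np.allclose(max_T(A, x_star, t), b))   # False for all three t-norms here
```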
Abstract: In this article, we study the inconsistency of systems of $\min-\rightarrow$ fuzzy relational equations. We give analytical formulas for computing the Chebyshev distances $\nabla = \inf_{d \in \mathcal{D}} \Vert \beta - d \Vert$ associated with systems of $\min-\rightarrow$ fuzzy relational equations of the form $\Gamma \Box_{\rightarrow}^{\min} x = \beta$, where $\rightarrow$ is a residual implicator among the G\"odel implication $\rightarrow_G$, the Goguen implication $\rightarrow_{GG}$ or Lukasiewicz's implication $\rightarrow_L$, and where $\mathcal{D}$ is the set of right-hand sides of consistent systems defined with the same matrix $\Gamma$. The main preliminary result that allows us to obtain these formulas is that the Chebyshev distance $\nabla$ is the lower bound of the solutions of a vector inequality, whatever residual implicator is used. Finally, we show that, in the case of a $\min-\rightarrow_{G}$ system, the Chebyshev distance $\nabla$ may be an infimum that is not attained, while it is always a minimum for $\min-\rightarrow_{GG}$ and $\min-\rightarrow_{L}$ systems.
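The sketch below illustrates the $\min-\rightarrow$ composition for the three residual implicators and a dual consistency test based on the lowest potential solution, which follows from residuation ($g \rightarrow x \geq b$ iff $x \geq T(g, b)$ for the adjoint t-norm $T$); the data is illustrative and the article's Chebyshev-distance formulas are not reproduced here.

```python
import numpy as np

# residual implicators and their adjoint t-norms on [0, 1]
IMPL = {
    "godel":  (lambda g, x: np.where(g <= x, 1.0, x),
               lambda g, b: np.minimum(g, b)),
    "goguen": (lambda g, x: np.where(g <= x, 1.0, x / np.maximum(g, 1e-12)),
               lambda g, b: g * b),
    "luka":   (lambda g, x: np.minimum(1.0, 1.0 - g + x),
               lambda g, b: np.maximum(g + b - 1.0, 0.0)),
}

def min_impl(G, x, name):
    """(Gamma box_->^min x)_i = min_j (gamma_ij -> x_j)."""
    return np.min(IMPL[name][0](G, x[None, :]), axis=1)

def lowest_potential_solution(G, beta, name):
    """By residuation, g -> x >= b iff x >= T(g, b), so the least candidate is
    x_j = max_i T(gamma_ij, beta_i); the system is consistent iff this candidate
    is an actual solution."""
    return np.max(IMPL[name][1](G, beta[:, None]), axis=0)

G = np.array([[0.8, 0.3], [0.4, 0.9]])
beta = np.array([0.6, 0.7])
for name in IMPL:
    x_low = lowest_potential_solution(G, beta, name)
    print(name, x_low, np.allclose(min_impl(G, x_low, name), beta))
```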
Abstract: In this article, we study the inconsistency of a system of $\max$-product fuzzy relational equations and of a system of $\max$-Lukasiewicz fuzzy relational equations. For a system of $\max-\min$ fuzzy relational equations $A \Box_{\min}^{\max} x = b$ and using the $L_\infty$ norm, (Baaj, 2023) showed that the Chebyshev distance $\Delta = \inf_{c \in \mathcal{C}} \Vert b - c \Vert$, where $\mathcal{C}$ is the set of right-hand sides of consistent systems defined with the same matrix $A$, can be computed by an explicit analytical formula expressed in terms of the components of the matrix $A$ and of the right-hand side $b$. In this article, we give analytical formulas analogous to that of (Baaj, 2023) for computing the Chebyshev distance associated with the right-hand side of a system of $\max$-product fuzzy relational equations and the one associated with the right-hand side of a system of $\max$-Lukasiewicz fuzzy relational equations.
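The article derives closed-form formulas; as a generic alternative, the Chebyshev distance can also be bracketed numerically, using the fact (which follows from residuation and monotonicity, and is assumed here rather than quoted from the article) that $b$ is within $L_\infty$ distance $\delta$ of $\mathcal{C}$ iff the greatest solution of $A \Box_{T}^{\max} x \leq \min(b+\delta, 1)$ yields a right-hand side at least $b - \delta$. Below is a bisection sketch for the $\max$-product case with illustrative data.

```python
import numpy as np

def max_prod(A, x):
    """(A box x)_i = max_j (a_ij * x_j)."""
    return np.max(A * x[None, :], axis=1)

def goguen(a, b):
    """Residual implicator of the product t-norm: a ->_GG b = 1 if a <= b else b/a."""
    return np.where(a <= b, 1.0, b / np.maximum(a, 1e-12))

def within_delta(A, b, delta):
    """True iff some consistent right-hand side c = A box x lies in [b - delta, b + delta]:
    take the greatest solution of A box x <= min(b + delta, 1) and test c >= b - delta."""
    x = np.min(goguen(A, np.minimum(b + delta, 1.0)[:, None]), axis=0)
    return np.all(max_prod(A, x) >= b - delta - 1e-12)

def chebyshev_distance(A, b, tol=1e-9):
    """Numerical approximation of Delta = inf_c ||b - c||_inf by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if within_delta(A, b, mid) else (mid, hi)
    return hi

A = np.array([[0.9, 0.2], [0.3, 0.4]])
b = np.array([0.5, 0.8])   # inconsistent: no x makes the second component reach 0.8
print(round(chebyshev_distance(A, b), 4))   # -> 0.4
```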
Abstract: In this article, we study the approximate solutions set $\Lambda_b$ of an inconsistent system of $\max-\min$ fuzzy relational equations $(S): A \Box_{\min}^{\max} x = b$. Using the $L_\infty$ norm, we compute by an explicit analytical formula the Chebyshev distance $\Delta~=~\inf_{c \in \mathcal{C}} \Vert b - c \Vert$, where $\mathcal{C}$ is the set of right-hand sides of the consistent systems defined with the same matrix $A$. We study the set $\mathcal{C}_b$ of Chebyshev approximations of the right-hand side $b$, i.e., vectors $c \in \mathcal{C}$ such that $\Vert b - c \Vert = \Delta$, which is associated with the approximate solutions set $\Lambda_b$ in the following sense: an element of the set $\Lambda_b$ is a solution vector $x^\ast$ of a system $A \Box_{\min}^{\max} x = c$ where $c \in \mathcal{C}_b$. As main results, we describe the structure of both the set $\Lambda_b$ and the set $\mathcal{C}_b$. We then introduce a paradigm for learning $\max-\min$ weight matrices that relate input and output data from training data, where the learning error is expressed in terms of the $L_\infty$ norm. We compute by an explicit formula the minimal value of the learning error according to the training data, and we give a method to construct weight matrices whose learning error is minimal, which we call approximate weight matrices. Finally, as an application of our results, we show how to approximately learn the rule parameters of a possibilistic rule-based system from multiple training data samples.
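The sketch below, on illustrative data, shows only the consistent-case counterpart of this learning paradigm: the greatest weight matrix $W$ such that $W \Box_{\min}^{\max} x^{(k)} \leq y^{(k)}$ for every training pair (Sanchez-style), together with its $L_\infty$ learning error. When that error is nonzero it only upper-bounds the minimal learning error; the article's approximate weight matrices are not constructed here.

```python
import numpy as np

def max_min(W, x):
    """(W box x)_i = max_j min(w_ij, x_j)."""
    return np.max(np.minimum(W, x[None, :]), axis=1)

def learn_greatest_matrix(X, Y):
    """Greatest W with W box x^(k) <= y^(k) for every training pair:
    w_ij = min_k (x_j^(k) ->_G y_i^(k)); the pairs are exactly max-min
    representable iff this W satisfies W box x^(k) = y^(k) for all k."""
    # X: (K, n) inputs, Y: (K, m) outputs
    impl = np.where(X[:, None, :] <= Y[:, :, None], 1.0, Y[:, :, None])  # (K, m, n)
    return np.min(impl, axis=0)

# illustrative training pairs (not data from the article)
X = np.array([[1.0, 0.3], [0.2, 0.9]])
Y = np.array([[0.7, 0.3], [0.4, 0.8]])
W = learn_greatest_matrix(X, Y)
err = max(np.max(np.abs(max_min(W, x) - y)) for x, y in zip(X, Y))
print(W, err)   # err == 0 here: this toy training data is exactly representable;
                # in general a nonzero err only upper-bounds the minimal L_inf error
```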