The literature on reachability analysis methods for neural networks currently focuses only on uncertainties in the network's inputs. In this paper, we introduce two new approaches for the reachability analysis of neural networks with additional uncertainties on their internal parameters (the weight matrices and bias vectors of each layer), which may open the field of formal methods on neural networks to new topics such as safe training or network repair. The first and main method that we propose relies on an existing reachability analysis approach based on mixed monotonicity, initially introduced for dynamical systems. The second approach extends the ESIP (Error-based Symbolic Interval Propagation) method, which was first implemented in the verification tool Neurify and first described in the publication introducing the tool VeriNet. Although ESIP has been shown to often outperform mixed-monotonicity reachability analysis in the classical case with uncertainties only on the network's inputs, we show in this paper through numerical simulations that the situation is largely reversed (in terms of precision, computation time, memory usage, and broader applicability) when dealing with uncertainties on the weights and biases.
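To make concrete what "uncertainties on the weights and biases" means computationally, the sketch below propagates an input interval through a single affine layer whose weight matrix and bias vector are themselves only known up to intervals, using plain interval arithmetic. This is a naive baseline for illustration only, not either of the two methods proposed in the paper; all names are ours.

```python
import numpy as np

def interval_affine(Wl, Wu, bl, bu, xl, xu):
    """Interval over-approximation of y = W @ x + b when (elementwise)
    W is in [Wl, Wu], b is in [bl, bu], and x is in [xl, xu].
    Naive interval arithmetic: each product W_ij * x_j attains its
    extreme values at a corner combination of the interval bounds."""
    # All four corner products for every (i, j) pair; x broadcasts over rows.
    cands = np.stack([Wl * xl, Wl * xu, Wu * xl, Wu * xu])
    lo = cands.min(axis=0).sum(axis=1) + bl
    hi = cands.max(axis=0).sum(axis=1) + bu
    return lo, hi

# Scalar example: W in [1, 2], b = 0, x in [-1, 3]
# corner products {-1, 3, -2, 6} give y in [-2, 6].
lo, hi = interval_affine(np.array([[1.0]]), np.array([[2.0]]),
                         np.array([0.0]), np.array([0.0]),
                         np.array([-1.0]), np.array([3.0]))
```

Composing such layers (with activation functions bounded in between) is exactly the setting where the naive approach loses precision and the two methods studied in the paper become relevant.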
This paper presents a new reachability analysis tool to compute an interval over-approximation of the output set of a feedforward neural network under a given input uncertainty. The proposed approach adapts to neural networks an existing mixed-monotonicity method for the reachability analysis of dynamical systems and applies it to all possible partial networks within the given neural network. This ensures that the intersection of the obtained results is the tightest interval over-approximation of each layer's output that can be obtained using mixed monotonicity. Unlike other tools in the literature, which are restricted to small classes of piecewise-affine or monotone activation functions, the main strength of our approach is its generality: it can handle neural networks with any Lipschitz-continuous activation function. In addition, the simplicity of the proposed framework allows users to very easily add unimplemented activation functions by providing the function, its derivative, and the global extrema of the derivative together with the arguments at which they are attained. Our algorithm is tested and compared with five other interval-based tools on 1000 randomly generated neural networks for four activation functions (ReLU, TanH, ELU, SiLU). We show that our tool always outperforms the Interval Bound Propagation method, and that we obtain tighter output bounds than ReluVal, Neurify, VeriNet, and CROWN (when they are applicable) in 15 to 60 percent of cases.
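To illustrate the kind of activation-function interface described above, here is a minimal sketch for SiLU: a registration providing the function, its derivative, and the arguments of the derivative's global extrema, plus a simple interval bound driven by the derivative's sign on the input interval. The registration format is hypothetical, the extremum arguments of SiLU's derivative (approximately ±2.3994) are approximate, and the non-monotone case uses a coarse Lipschitz fallback rather than the tighter mixed-monotonicity decomposition used by the tool.

```python
import numpy as np

def silu(x):
    return x / (1.0 + np.exp(-x))

def silu_prime(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 + x * (1.0 - s))

# Hypothetical registration: the function, its derivative, and the arguments
# where the derivative attains its global extrema (approximate for SiLU:
# min ~ -0.0998 at x ~ -2.3994, max ~ 1.0998 at x ~ +2.3994).
SILU_SPEC = {
    "f": silu,
    "df": silu_prime,
    "df_extrema_args": [-2.3994, 2.3994],
}

def output_interval(spec, a, b):
    """Interval over-approximating {f(x) : x in [a, b]}."""
    f, df = spec["f"], spec["df"]
    # Candidate extremizers of f' on [a, b]: the endpoints plus any
    # registered interior critical points of f'.
    pts = [a, b] + [t for t in spec["df_extrema_args"] if a < t < b]
    d = [df(t) for t in pts]
    dmin, dmax = min(d), max(d)
    if dmin >= 0.0:   # f is increasing on [a, b]
        return f(a), f(b)
    if dmax <= 0.0:   # f is decreasing on [a, b]
        return f(b), f(a)
    # Non-monotone: coarse Lipschitz bound around the interval midpoint.
    c, r = 0.5 * (a + b), 0.5 * (b - a)
    L = max(abs(dmin), abs(dmax))
    return f(c) - L * r, f(c) + L * r

# SiLU is increasing on [1, 3], so the bound there is exact.
lo, hi = output_interval(SILU_SPEC, 1.0, 3.0)
```

The point of the sketch is that nothing beyond the function, its derivative, and the derivative's extrema is needed from the user, which is what makes the interface easy to extend to new Lipschitz-continuous activations.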