Stable invariant sets are an essential notion in the analysis and application of dynamical systems. It is thus of great interest to learn dynamics models with provable guarantees on the existence of stable invariant sets. However, existing methods can only handle the stability of discrete equilibria, which hinders many applications. In this paper, we propose a method to ensure that a learned dynamics model has a stable invariant set of a general class. To this end, we modify a base dynamics model using a learnable Lyapunov-like function so that the modified dynamics attain the invariance and stability of a specific subset of the state space. We model such a subset by transforming a primitive shape (e.g., a sphere) via a learnable bijective function. The primitive shape can be specified from prior knowledge of the dynamics, if available, or learned from data. We present an example implementation of the proposed dynamics model using neural networks and report experimental results that show the validity of the proposed method.
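As a rough illustration of this kind of construction, the following PyTorch sketch modifies a base dynamics model by removing the component of the vector field that violates a decrease condition on a learnable Lyapunov-like function $V$. It is a minimal sketch only: the module names are hypothetical, and it is restricted to a stable equilibrium at the origin rather than the general invariant sets treated in the paper.

```python
import torch
import torch.nn as nn

class StabilizedDynamics(nn.Module):
    """Sketch: project base dynamics f so that a learnable Lyapunov-like
    function V decreases along trajectories (illustrated for a point
    equilibrium; the paper treats more general invariant sets)."""
    def __init__(self, dim, alpha=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
        self.V = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))
        self.alpha = alpha

    def forward(self, x):
        x = x.requires_grad_(True)
        v = self.V(x)
        grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
        fx = self.f(x)
        # Amount by which f violates the decrease condition dV/dt <= -alpha*V.
        violation = torch.relu((grad_v * fx).sum(-1, keepdim=True) + self.alpha * v)
        # Subtract the violating component along grad V.
        return fx - violation * grad_v / (grad_v.pow(2).sum(-1, keepdim=True) + 1e-8)
```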
Anomaly localization is as essential a problem as anomaly detection. Because rigorous localization requires a causal model of the target system, in practice we often resort to the relaxed problem of anomaly interpretation, in which the goal is to obtain a meaningful attribution of anomaly scores to input features. In this paper, we investigate the use of the Shapley value for anomaly interpretation. We focus on semi-supervised anomaly detection and propose a new characteristic function, on which the Shapley value is computed, designed specifically for anomaly scores. The idea of the proposed method is to approximate the absence of a subset of features by minimizing the anomaly score with respect to them. We examine the performance of the proposed method as well as other general approaches to computing the Shapley value in interpreting anomaly scores. We report the results of experiments on multiple datasets and anomaly detection methods, which indicate the usefulness of Shapley-based anomaly interpretation toward anomaly localization.
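The following sketch illustrates this idea, assuming `score` is any callable anomaly-score function (e.g., a negative log-likelihood under a fitted model). It estimates Shapley values by Monte-Carlo sampling of permutations, with the characteristic function realized by minimizing the score over the absent features; all names are hypothetical and the exact formulation in the paper may differ.

```python
import numpy as np
from scipy.optimize import minimize

def char_fn(score, x, present):
    """Characteristic function: approximate the absence of the features not
    in `present` by minimizing the anomaly score with respect to them."""
    absent = [i for i in range(len(x)) if i not in present]
    if not absent:
        return score(x)
    def obj(z):
        x_mod = x.copy()
        x_mod[absent] = z
        return score(x_mod)
    return minimize(obj, x[absent]).fun

def shapley_values(score, x, n_perm=50, seed=0):
    """Monte-Carlo Shapley values of the anomaly score over input features."""
    rng = np.random.default_rng(seed)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        present, prev = set(), char_fn(score, x, set())
        for i in rng.permutation(d):
            present.add(i)
            cur = char_fn(score, x, present)
            phi[i] += cur - prev
            prev = cur
    return phi / n_perm
```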
Kernel methods have been among the most popular techniques in machine learning, where learning tasks are solved using the properties of reproducing kernel Hilbert spaces (RKHSs). In this paper, we propose a novel data analysis framework with reproducing kernel Hilbert $C^*$-modules (RKHMs), a generalization of RKHSs distinct from vector-valued RKHSs (vv-RKHSs). Analysis with RKHMs enables us to deal with structures among variables more explicitly than with vv-RKHSs. We establish the theoretical validity of constructing orthonormal systems in Hilbert $C^*$-modules and derive concrete orthonormalization procedures in RKHMs that retain these theoretical properties in numerical computation. Moreover, we apply these results to generalize kernel principal component analysis and the Perron-Frobenius-operator analysis of dynamical systems to the RKHM setting. The empirical performance of our methods is also investigated using synthetic and real-world data.
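As a toy illustration of module-valued orthonormalization, the following NumPy sketch runs Gram-Schmidt in the finite-dimensional Hilbert $C^*$-module $\mathbb{C}^{n\times d}$ over the matrix algebra $M_d$, where inner products are $d\times d$ matrices and normalization multiplies on the right by an inverse matrix square root. This is only a finite-dimensional analogue for intuition, not the RKHM construction itself.

```python
import numpy as np

def inv_sqrt(G, eps=1e-10):
    """(Pseudo-)inverse square root of a positive semi-definite matrix."""
    w, U = np.linalg.eigh(G)
    w_is = np.where(w > eps, 1.0 / np.sqrt(np.clip(w, eps, None)), 0.0)
    return U @ np.diag(w_is) @ U.T

def module_gram_schmidt(vs):
    """Gram-Schmidt in the module C^{n x d} over M_d: inner products
    <u, v> = u.T @ v are d x d matrices, and each vector is normalized by
    right-multiplication with <q, q>^{-1/2}."""
    qs = []
    for v in vs:
        q = v.copy()
        for p in qs:
            q -= p @ (p.T @ q)        # subtract the module-valued projection
        qs.append(q @ inv_sqrt(q.T @ q))
    return qs
```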
Operator-theoretic analysis of nonlinear dynamical systems has attracted much attention in a variety of engineering and scientific fields, endowed with practical data-driven estimation methods such as dynamic mode decomposition. In this paper, we address a lifted representation of nonlinear dynamical systems with random noise based on transfer operators, and develop a novel Krylov subspace method for estimating the operators from finite data, with consideration of the unboundedness of the operators. To this end, we first consider Perron-Frobenius operators with kernel-mean embeddings for such systems. We then extend the Arnoldi method, the most classical Krylov subspace method, so that it can be applied to this setting. However, the Arnoldi method requires the operator to be bounded, which is not necessarily satisfied by transfer operators of nonlinear systems. We accordingly develop the shift-invert Arnoldi method for Perron-Frobenius operators to avoid this problem. We also describe a way of evaluating the predictive accuracy of the estimated operators on the basis of the maximum mean discrepancy, which is applicable, for example, to anomaly detection in complex systems. The empirical performance of our methods is investigated using synthetic and real-world healthcare data.
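For reference, the classical Arnoldi iteration that the paper builds on looks as follows in NumPy; the shift-invert variant applies the same loop to $(A-\gamma I)^{-1}$ instead of $A$, which is why it can cope with unbounded operators. This is a generic textbook sketch, not the paper's kernel-embedded version.

```python
import numpy as np

def arnoldi(apply_A, b, m):
    """Classical Arnoldi iteration: build an orthonormal basis Q of the
    Krylov subspace span{b, Ab, ..., A^(m-1) b} and a Hessenberg matrix H
    satisfying A Q[:, :m] = Q H (up to truncation)."""
    n = b.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = apply_A(Q[:, j])
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # invariant subspace found
            return Q[:, :j + 1], H[:j + 1, :j]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H
```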
The development of a metric on structural data-generating mechanisms is fundamental in machine learning and related fields. In this paper, we consider a general framework for constructing metrics on {\em random} nonlinear dynamical systems, defined with the Perron-Frobenius operators in vector-valued reproducing kernel Hilbert spaces (vvRKHSs). Here, vvRKHSs are employed to design mathematically manageable metrics and also to introduce $L^2(\Omega)$-valued kernels, which are necessary to handle the randomness in the systems. Our metric is a natural extension of existing metrics for {\em deterministic} systems, and yields a specification of the kernel maximum mean discrepancy of random processes. Moreover, by considering the time-wise independence of random processes, we discuss the connection between our metric and kernel-based independence criteria such as the Hilbert-Schmidt independence criterion. We empirically illustrate our metric with synthetic data, and evaluate it in the context of independence testing for random processes.
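For concreteness, the standard unbiased estimator of the squared kernel maximum mean discrepancy between two static samples, to which our metric relates in the random-process setting, can be written as follows (a generic sketch with a Gaussian kernel, not the paper's $L^2(\Omega)$-valued construction):

```python
import numpy as np

def gaussian_gram(X, Y, sigma=1.0):
    """Gaussian-kernel Gram matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimator of the squared MMD between samples X (m x d)
    and Y (n x d)."""
    m, n = len(X), len(Y)
    Kxx, Kyy = gaussian_gram(X, X, sigma), gaussian_gram(Y, Y, sigma)
    Kxy = gaussian_gram(X, Y, sigma)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())
```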
Understanding complex network dynamics is a fundamental issue in various scientific and engineering fields. Network theory is capable of revealing the relationships between elements and their propagation; however, for complex collective motions, the network properties often change transiently and in complicated ways. A fundamental question addressed here pertains to the classification of collective motion networks based on physically interpretable dynamical properties. Here we apply a data-driven spectral analysis called graph dynamic mode decomposition, which extracts the dynamical properties used for collective motion classification. Using a ballgame as an example, we classified the strategic collective motions in different global behaviours and discovered that, in addition to the physical properties, the contextual node information was critical for classification. Furthermore, we discovered label-specific stronger spectra in the relationships among the nearest agents, providing physical and semantic interpretations. Our approach contributes to the understanding of complex networks involving collective motions from the perspective of nonlinear dynamical systems.
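As a reference for the spectral machinery involved, a minimal NumPy implementation of exact dynamic mode decomposition, of which graph DMD is a network-aware variant, is sketched below; `X` and `Y` are snapshot matrices with `Y` approximately `A @ X`.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: given snapshot pairs (X, Y) with Y ~ A X, return the
    leading r DMD eigenvalues and modes via a rank-r truncated SVD of X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    A_tilde = U.conj().T @ Y @ V / s    # reduced operator in POD coordinates
    evals, W = np.linalg.eig(A_tilde)
    modes = (Y @ V / s) @ W             # exact DMD modes
    return evals, modes
```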
When approaching problems in computer science, we often encounter situations where a subset of a finite set maximizing some utility function needs to be selected. Some such utility functions are known to be approximately submodular. For the problem of maximizing an approximately submodular function (the ASFM problem), a greedy algorithm quickly finds good feasible solutions for many instances while guaranteeing a $(1-e^{-\gamma})$-approximation ratio for a given submodularity ratio $\gamma$. However, we still encounter applications that require more accurate or exactly optimal solutions within a reasonable computation time. In this paper, we present an efficient branch-and-cut algorithm for the non-decreasing ASFM problem based on its binary integer programming (BIP) formulation with an exponential number of constraints. To this end, we first derive a BIP formulation of the ASFM problem and then develop an improved constraint generation algorithm that starts from a reduced BIP problem with a small subset of the constraints and repeatedly solves the reduced BIP problem while adding a promising set of constraints at each iteration. Moreover, we incorporate it into a branch-and-cut algorithm to attain good upper bounds while solving a smaller number of nodes of the search tree. Computational results for three types of well-known benchmark instances show that our algorithm performs better than conventional exact algorithms.
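For reference, the greedy baseline with the $(1-e^{-\gamma})$ guarantee under a cardinality constraint is only a few lines; here `f` is the utility function and `ground` the ground set (names are illustrative):

```python
def greedy_max(f, ground, k):
    """Greedy maximization of a set function f under |S| <= k; achieves a
    (1 - e^{-gamma}) guarantee when f is non-decreasing with
    submodularity ratio gamma."""
    S = set()
    for _ in range(k):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S), default=None)
        if best is None or f(S | {best}) - f(S) <= 0:
            break                      # no element improves the objective
        S.add(best)
    return S
```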
Generative modeling is a fundamental problem in machine learning with many potential applications. Efficient learning of generative models requires exploiting available prior knowledge as much as possible. In this paper, we propose a method to exploit prior knowledge of the relative dependence between features when learning generative models. Such knowledge is available, for example, when side information on the features is present. We incorporate the prior knowledge by forcing the marginals of the learned generative model to follow a prescribed relative feature dependence. To this end, we formulate a regularization term using a kernel-based dependence criterion. The proposed method can be incorporated straightforwardly into many optimization-based learning schemes for generative models, including variational autoencoders and generative adversarial networks. We show the effectiveness of the proposed method in experiments with multiple types of datasets and models.
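As a sketch of such a kernel-based dependence criterion, a differentiable (biased) HSIC estimator in PyTorch could serve as the regularization term added to a VAE or GAN loss; the exact form of the regularizer used in the paper may differ, and the function names below are illustrative.

```python
import torch

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC between batches x (n x p) and y (n x q) with
    Gaussian kernels; differentiable, so it can be added as a penalty that
    pushes generated marginals toward a prescribed feature dependence."""
    n = x.shape[0]
    def gram(z):
        return torch.exp(-torch.cdist(z, z) ** 2 / (2 * sigma ** 2))
    H = torch.eye(n, device=x.device) - torch.ones(n, n, device=x.device) / n
    K, L = gram(x), gram(y)
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2
```

A typical use would be `loss = base_loss + beta * hsic(gen[:, idx_a], gen[:, idx_b])` to suppress (or, with the opposite sign of comparison to a target value, encourage) dependence between two feature groups of the generated samples.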
Submodular function maximization is an attractive optimization model that appears in many real applications. Although a variety of greedy algorithms quickly find good feasible solutions for many instances while guaranteeing a $(1-1/e)$-approximation ratio, we still encounter many real applications that demand optimal or better feasible solutions within a reasonable computation time. In this paper, we present an efficient branch-and-bound algorithm for the non-decreasing submodular function maximization problem based on its binary integer programming (BIP) formulation with a huge number of constraints. Nemhauser and Wolsey developed an exact algorithm, called the constraint generation algorithm, that starts from a reduced BIP problem with a small subset of the constraints and repeatedly solves the reduced BIP problem while adding a new constraint at each iteration. However, their algorithm is still computationally expensive because many reduced BIP problems must be solved. To overcome this, we propose an improved constraint generation algorithm that adds a promising set of constraints at each iteration. We incorporate it into a branch-and-bound algorithm to attain good upper bounds while solving a smaller number of reduced BIP problems. According to computational results for well-known benchmark instances, our algorithm achieves better performance than the state-of-the-art exact algorithms.
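The following self-contained sketch illustrates the constraint-generation loop on the Nemhauser-Wolsey formulation, maximizing $\eta$ subject to $\eta \le f(S) + \sum_{j \notin S} \rho_j(S)\,x_j$ over a growing family of sets $S$. The reduced problem is solved here by brute-force enumeration purely for illustration on tiny instances; in practice a MIP solver would be used, and this sketch does not include the paper's improvements.

```python
from itertools import combinations

def constraint_generation(f, ground, k, max_iter=50):
    """Nemhauser-Wolsey constraint generation for maximizing a
    non-decreasing submodular f under |S| <= k (brute-force reduced solve)."""
    def rho(j, S):
        return f(S | {j}) - f(S)            # marginal gain of j given S
    cuts = [frozenset()]                    # current family of constraints
    candidates = [set(c) for r in range(k + 1)
                  for c in combinations(ground, r)]
    for _ in range(max_iter):
        # Upper bound on f(T) implied by the current cuts.
        def bound(T):
            return min(f(S) + sum(rho(j, S) for j in T - S) for S in cuts)
        T = max(candidates, key=bound)      # solve the reduced problem
        if bound(T) <= f(T) + 1e-9:         # bound is tight: T is optimal
            return T, f(T)
        cuts.append(frozenset(T))           # add the violated constraint
    return T, f(T)
```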
The development of a metric for structural data is a long-standing problem in pattern recognition and machine learning. In this paper, we develop a general metric for comparing nonlinear dynamical systems that is defined with Perron-Frobenius operators in reproducing kernel Hilbert spaces. Our metric includes the existing fundamental metrics for dynamical systems, which are essentially defined via principal angles between appropriately chosen subspaces, as special cases. We also describe the estimation of our metric from finite data. We empirically illustrate our metric with an example of rotation dynamics on the unit disk in the complex plane, and evaluate its performance with real-world time-series data.
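As a pointer to the classical ingredient that our metric generalizes, principal angles between subspaces, and a Grassmann-type distance built from them, can be computed directly with SciPy:

```python
import numpy as np
from scipy.linalg import subspace_angles

def principal_angle_distance(X, Y):
    """Geodesic (Grassmann) distance between the subspaces spanned by the
    columns of X and Y, the building block of classical metrics on
    dynamical systems such as the Martin distance."""
    theta = subspace_angles(X, Y)    # principal angles in radians
    return np.sqrt((theta ** 2).sum())
```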