Fei Lu

Small noise analysis for Tikhonov and RKHS regularizations

May 18, 2023
Quanjun Lang, Fei Lu

Regularization plays a pivotal role in ill-posed machine learning and inverse problems, yet a fundamental comparative analysis of the various regularization norms remains open. We establish a small noise analysis framework to assess the effects of norms in Tikhonov and RKHS regularizations, in the context of ill-posed linear inverse problems with Gaussian noise. The framework studies the convergence rates of regularized estimators in the small noise limit and reveals a potential instability of the conventional L2-regularizer. We resolve this instability by proposing an innovative class of adaptive fractional RKHS regularizers, which covers both the L2 Tikhonov and RKHS regularizations by adjusting the fractional smoothness parameter. A surprising insight is that over-smoothing via these fractional RKHSs consistently yields optimal convergence rates, although the optimal hyper-parameter may decay too fast to be selected in practice.
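
For context, here is a hedged sketch of the setting in our own notation (not taken verbatim from the paper). The Tikhonov-type estimator for the linear inverse problem $y = Ax + \sigma\dot{W}$ is

$$
\widehat{x}_\lambda = \operatorname*{arg\,min}_{x} \; \|Ax - y\|^2 + \lambda \|x\|_{*}^2 ,
$$

where the choice of the norm $\|\cdot\|_{*}$ is the object of comparison: the $L^2$ norm gives the conventional Tikhonov regularizer, an RKHS norm gives RKHS regularization, and a fractional RKHS norm with smoothness parameter $s$ interpolates between the two. The small noise analysis then asks at what rate $\widehat{x}_{\lambda(\sigma)}$ converges to the true solution as $\sigma \to 0$, with $\lambda(\sigma)$ optimally tuned.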

Benchmarking optimality of time series classification methods in distinguishing diffusions

Feb 05, 2023
Zehong Zhang, Fei Lu, Esther Xu Fei, Terry Lyons, Yannis Kevrekidis, Tom Woolf

Performance benchmarking is a crucial component of time series classification (TSC) algorithm design, and a fast-growing number of datasets have been established for empirical benchmarking. However, empirical benchmarks are costly and do not guarantee statistical optimality. This study proposes to benchmark the optimality of TSC algorithms in distinguishing diffusion processes by the likelihood ratio test (LRT). The LRT is optimal in the sense of the Neyman-Pearson lemma: it has the smallest false positive rate among classifiers with a controlled false negative rate. The LRT requires the likelihood ratio of the time series to be computable; diffusion processes defined by stochastic differential equations provide such time series and are flexible for generating linear or nonlinear dynamics. We demonstrate the benchmarking with three scalable state-of-the-art TSC algorithms: random forest, ResNet, and ROCKET. Test results show that they can achieve LRT optimality for univariate time series and multivariate Gaussian processes. However, these model-agnostic algorithms are suboptimal in classifying nonlinear multivariate time series from high-dimensional stochastic interacting particle systems. Additionally, the LRT benchmark provides tools to analyze the dependence of classification accuracy on the time length, dimension, temporal sampling frequency, and randomness of the time series. Thus, the LRT with diffusion processes can systematically and efficiently benchmark the optimality of TSC algorithms and may guide their future improvements.
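
To make the benchmark concrete, here is a minimal sketch (our construction, not the paper's code) of an LRT classifier for discretely observed diffusions: under an Euler-Maruyama discretization of $dX = b_i(X)\,dt + \sigma\,dW$, each increment is approximately Gaussian, so the log-likelihood ratio is a sum over increments.

```python
# Minimal sketch of the LRT benchmark idea; names and the OU example are ours.
import numpy as np

def log_likelihood(x, dt, drift, sigma):
    """Euler-Maruyama log-likelihood of a path x[0..n] under dX = drift(X)dt + sigma dW."""
    dx = np.diff(x)
    mean = drift(x[:-1]) * dt
    var = sigma**2 * dt
    return np.sum(-0.5 * (dx - mean) ** 2 / var - 0.5 * np.log(2 * np.pi * var))

def lrt_classify(x, dt, drift0, drift1, sigma, threshold=0.0):
    """Neyman-Pearson classifier: report class 1 when the log-likelihood ratio exceeds the threshold."""
    llr = log_likelihood(x, dt, drift1, sigma) - log_likelihood(x, dt, drift0, sigma)
    return int(llr > threshold)

# Example: classify an Ornstein-Uhlenbeck path between two mean-reversion rates.
rng = np.random.default_rng(0)
dt, n, sigma = 0.01, 1000, 0.5
x = np.zeros(n + 1)
for k in range(n):
    x[k + 1] = x[k] - 2.0 * x[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
print(lrt_classify(x, dt, lambda s: -1.0 * s, lambda s: -2.0 * s, sigma))  # expect 1
```

Sweeping the threshold traces out the optimal ROC curve, which is the benchmark against which the TSC algorithms are compared.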

* 21 pages, 8 figures 

A Data-Adaptive Prior for Bayesian Learning of Kernels in Operators

Dec 29, 2022
Neil K. Chada, Quanjun Lang, Fei Lu, Xiong Wang

Kernels are efficient in representing nonlocal dependence, and they are widely used to design operators between function spaces; learning kernels in operators from data is therefore an inverse problem of general interest. Due to the nonlocal dependence, the inverse problem can be severely ill-posed with a data-dependent singular inversion operator. The Bayesian approach overcomes the ill-posedness through a non-degenerate prior. However, a fixed non-degenerate prior leads to a divergent posterior mean when the observation noise becomes small, if the data induces a perturbation in the eigenspace of zero eigenvalues of the inversion operator. We introduce a data-adaptive prior to achieve a stable posterior whose mean always has a small noise limit. The data-adaptive prior's covariance is the inversion operator, with a hyper-parameter selected adaptively from the data by the L-curve method. Furthermore, we provide a detailed analysis of the computational practice of the data-adaptive prior and demonstrate it on Toeplitz matrices and integral operators. Numerical tests show that a fixed prior can lead to a divergent posterior mean in the presence of any of four types of error: discretization error, model error, partial observation, and a wrong noise assumption. In contrast, the data-adaptive prior always attains posterior means with small noise limits.
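
A schematic of the mechanism, in loose matrix notation for the discretized problem (the notation and the simplification are ours): for $y = A\phi + \sigma\xi$ with Gaussian prior $\mathcal{N}(0, \lambda^{-1}C)$, the posterior mean is

$$
\mu_{\mathrm{post}} = C A^{\top}\big(A C A^{\top} + \lambda\sigma^{2} I\big)^{-1} y .
$$

With a fixed non-degenerate $C$, components of $y$ lying in the null space of the inversion operator $A^{\top}A$ are amplified as $\sigma \to 0$; with the data-adaptive choice $C \propto A^{\top}A$, the mean is confined to the range of the inversion operator, such perturbations are filtered out, and the small noise limit exists.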

* 30 pages, 8 figures 

Unsupervised learning of observation functions in state-space models by nonparametric moment methods

Jul 12, 2022
Qingci An, Yannis Kevrekidis, Fei Lu, Mauro Maggioni

We investigate the unsupervised learning of non-invertible observation functions in nonlinear state-space models. Assuming abundant data of the observation process along with the distribution of the state process, we introduce a nonparametric generalized moment method to estimate the observation function via constrained regression. The major challenge comes from the non-invertibility of the observation function and the lack of data pairs between the state and observation. We address the fundamental issue of identifiability from quadratic loss functionals and show that the function space of identifiability is the closure of an RKHS that is intrinsic to the state process. Numerical results show that the first two moments and temporal correlations, along with upper and lower bounds, can identify functions ranging from piecewise polynomials to smooth functions, leading to convergent estimators. The limitations of this method, such as non-identifiability due to symmetry and stationarity, are also discussed.
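
A minimal sketch of the generalized-moment idea (our construction; the paper's actual estimator also imposes upper/lower bounds and solves a regularized constrained regression, which we omit):

```python
# Minimal sketch: fit f = sum_j c_j phi_j by matching the first two moments and
# the lag-1 temporal correlation of the observations; names and setup are ours.
import numpy as np
from scipy.optimize import minimize

def fit_observation_fn(x_traj, y_traj, basis):
    """x_traj: trajectory sampled from the (known) state process; y_traj: observed
    trajectory; basis: list of callables phi_j."""
    Phi = np.column_stack([phi(x_traj) for phi in basis])

    # empirical moments of the observation process to be matched
    m1, m2 = y_traj.mean(), (y_traj ** 2).mean()
    corr = (y_traj[:-1] * y_traj[1:]).mean()  # lag-1 temporal correlation

    def mismatch(c):
        fx = Phi @ c
        return ((fx.mean() - m1) ** 2
                + ((fx ** 2).mean() - m2) ** 2
                + ((fx[:-1] * fx[1:]).mean() - corr) ** 2)

    return minimize(mismatch, x0=np.zeros(Phi.shape[1])).x
```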

Nonparametric learning of kernels in nonlocal operators

May 23, 2022
Fei Lu, Qingci An, Yue Yu

Nonlocal operators with integral kernels have become a popular tool for designing solution maps between function spaces, due to their efficiency in representing long-range dependence and the attractive feature of being resolution-invariant. In this work, we provide a rigorous identifiability analysis and convergence study for the learning of kernels in nonlocal operators. It is found that kernel learning is an ill-posed or even ill-defined inverse problem, leading to divergent estimators in the presence of modeling errors or measurement noise. To resolve this issue, we propose a nonparametric regression algorithm with a novel data-adaptive RKHS Tikhonov regularization method based on the function space of identifiability. The method yields a noise-robust estimator of the kernel that converges as the data resolution refines, on both synthetic and real-world datasets. In particular, the method successfully learns a homogenized model for stress wave propagation in a heterogeneous solid, revealing the unknown governing laws from real-world data at the microscale. Our regularization method outperforms baseline methods in robustness, generalizability, and accuracy.
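
A minimal sketch of the regression structure (our 1D discretization, not the authors' code): the nonlocal operator is linear in the kernel, so data pairs $(u, f = L_K[u])$ yield a linear system for the kernel values on a grid.

```python
# Minimal sketch: least-squares recovery of a radial kernel K on a 1D grid from
# data pairs (u, f = L_K[u]); discretization and names are ours.
import numpy as np

def build_regression(u, f, dx, n_lags):
    """L_K[u](x_i) ~ sum_j K(r_j) * (u(x_i + r_j) + u(x_i - r_j) - 2 u(x_i)) * dx,
    which is linear in the kernel values K(r_1..r_m)."""
    rows, rhs = [], []
    for i in range(n_lags, len(u) - n_lags):
        row = [(u[i + j] + u[i - j] - 2.0 * u[i]) * dx for j in range(1, n_lags + 1)]
        rows.append(row)
        rhs.append(f[i])
    return np.array(rows), np.array(rhs)

def learn_kernel(u, f, dx, n_lags, lam=1e-6):
    A, b = build_regression(u, f, dx, n_lags)
    # Plain l2-Tikhonov shown for brevity; the paper replaces the identity below
    # with a data-adaptive RKHS norm tied to the function space of identifiability.
    return np.linalg.solve(A.T @ A + lam * np.eye(n_lags), A.T @ b)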

Data adaptive RKHS Tikhonov regularization for learning kernels in operators

Mar 08, 2022
Fei Lu, Quanjun Lang, Qingci An

We present DARTR, a Data Adaptive RKHS Tikhonov Regularization method for the linear inverse problem of nonparametric learning of function parameters in operators. A key ingredient is a system intrinsic data-adaptive (SIDA) RKHS, whose norm restricts the learning to take place in the function space of identifiability. DARTR utilizes this norm and selects the regularization parameter by the L-curve method. We illustrate its performance on examples including integral operators, nonlinear operators, and nonlocal operators with discrete synthetic data. Numerical results show that DARTR yields an accurate estimator that is robust to both the numerical error of discrete data and the noise in the data, and that converges at a consistent rate as the data mesh refines under different noise levels, outperforming two baseline regularizers using the $l^2$ and $L^2$ norms.
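
A minimal sketch of the L-curve selection step (our simplified version): solve the regularized problem on a grid of parameters and pick the corner of the curve of log residual norm versus log regularization norm, located here by maximum discrete curvature.

```python
# Minimal sketch of L-curve parameter selection; the Gram matrix B encodes the
# regularization norm (B = I recovers plain l2-Tikhonov; DARTR uses the SIDA-RKHS norm).
import numpy as np

def l_curve_select(A, b, B, lambdas):
    res, reg = [], []
    for lam in lambdas:
        c = np.linalg.solve(A.T @ A + lam * B, A.T @ b)
        res.append(np.log(np.linalg.norm(A @ c - b)))
        reg.append(np.log(np.sqrt(c @ B @ c)))
    x, y = np.array(res), np.array(reg)
    # discrete curvature of the curve lam -> (x, y); the corner balances the two norms
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return lambdas[int(np.argmax(kappa))]
```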

Identifiability of interaction kernels in mean-field equations of interacting particles

Jun 10, 2021
Quanjun Lang, Fei Lu

We study the identifiability of the interaction kernels in mean-field equations for interacting particle systems. The key is to identify function spaces on which a probabilistic loss functional has a unique minimizer. We prove that identifiability holds on any subspace of two reproducing kernel Hilbert spaces (RKHS) whose reproducing kernels are intrinsic to the system and are data-adaptive. Furthermore, identifiability holds on the two ambient L2 spaces if and only if the integral operators associated with the reproducing kernels are strictly positive. Thus, the inverse problem is ill-posed in general. We also discuss the implications of identifiability in computational practice.
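
In schematic form (our notation, not the paper's): the probabilistic loss functional is quadratic in the kernel,

$$
\mathcal{E}(\phi) = \langle \mathcal{L}\phi, \phi \rangle - 2\,\langle b, \phi \rangle + c ,
$$

where $\mathcal{L}$ is the integral operator associated with the data-adaptive reproducing kernel and $b$, $c$ are determined by the data. Its minimizer solves $\mathcal{L}\phi = b$, so $\mathcal{E}$ has a unique minimizer on a subspace exactly when $\mathcal{L}$ is strictly positive there; even then, eigenvalues of $\mathcal{L}$ accumulating at zero make the inversion ill-posed on the ambient $L^2$ space.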

Domain Adaptive Monocular Depth Estimation With Semantic Information

Apr 12, 2021
Fei Lu, Hyeonwoo Yu, Jean Oh

The advent of deep learning has brought impressive advances to monocular depth estimation; in particular, supervised monocular depth estimation has been thoroughly investigated. However, large-scale RGB-to-depth datasets are not always available, since collecting accurate depth ground truth for RGB images is a time-consuming and expensive task. Although a network can be trained on an alternative dataset to overcome this scale problem, the trained model generalizes poorly to the target domain due to the domain discrepancy. Adversarial domain alignment has demonstrated its efficacy in mitigating domain shift on simple image classification tasks in previous works. However, traditional approaches handle conditional alignment poorly, as they consider only the feature maps of the network. In this paper, we propose an adversarial training model that leverages semantic information to narrow the domain gap. In experiments on datasets for the monocular depth estimation task, including KITTI and Cityscapes, the proposed compact model achieves performance comparable to complex state-of-the-art models and shows favorable results on boundaries and objects at far distances.
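
A minimal sketch of semantics-conditioned adversarial alignment (our simplification; the module names and sizes are assumptions, not the authors' architecture):

```python
# Minimal sketch: adversarial feature alignment conditioned on semantics, via a
# gradient-reversal layer. Layer sizes and the 19-class head are assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad  # reversed gradient makes the encoder fool the discriminator

encoder = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
depth_head = nn.Conv2d(64, 1, 1)    # task head (supervised depth loss omitted here)
seg_head = nn.Conv2d(64, 19, 1)     # semantic head, e.g. 19 Cityscapes classes
discriminator = nn.Conv2d(64 + 19, 1, 1)  # predicts source vs. target per location

def domain_logits(images):
    feats = encoder(images)
    sem = seg_head(feats).softmax(dim=1)
    # concatenating semantics lets the discriminator judge alignment per class
    fused = torch.cat([GradReverse.apply(feats), sem.detach()], dim=1)
    return discriminator(fused)  # train with a binary source/target label (BCE)
```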

* 8 pages, 5 figures, code will be released soon 

ISALT: Inference-based schemes adaptive to large time-stepping for locally Lipschitz ergodic systems

Feb 25, 2021
Xingjie Li, Fei Lu, Felix X. -F. Ye

Efficient simulation of SDEs is essential in many applications, particularly for ergodic systems that demand efficient simulation of both short-time dynamics and large-time statistics. However, locally Lipschitz SDEs often require special treatments, such as implicit schemes with small time-steps, to accurately simulate the ergodic measure. We introduce a framework to construct inference-based schemes adaptive to large time-steps (ISALT) from data, achieving a reduction in computation time by several orders of magnitude. The key is the statistical learning of an approximation to the infinite-dimensional discrete-time flow map. We explore the use of numerical schemes (such as Euler-Maruyama, a hybrid RK4, and an implicit scheme) to derive informed basis functions, leading to a parameter inference problem. We introduce a scalable algorithm to estimate the parameters by least squares, and we prove the convergence of the estimators as the data size increases. We test ISALT on three non-globally Lipschitz SDEs: the 1D double-well potential, a 2D multiscale gradient system, and the 3D stochastic Lorenz equation with degenerate noise. Numerical results show that ISALT can tolerate time-steps orders of magnitude larger than plain numerical schemes can, and it reaches optimal accuracy in reproducing the invariant measure when the time-step is medium-large.
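
A minimal sketch of the inference step (our notation): given coarse-sampled data from an accurate fine solver, the coefficients of the informed basis functions are estimated by least squares.

```python
# Minimal sketch of the least-squares inference in ISALT; the basis below is a
# hypothetical example, not the paper's exact choice.
import numpy as np

def infer_scheme(X, dt, basis):
    """X: coarse-sampled trajectory (n+1,); basis: callables psi_j derived from
    numerical schemes (e.g. Euler-Maruyama or RK4 increments)."""
    Psi = np.column_stack([psi(X[:-1]) for psi in basis]) * dt  # design matrix
    dX = np.diff(X)
    c, *_ = np.linalg.lstsq(Psi, dX, rcond=None)                # drift coefficients
    s = np.std(dX - Psi @ c) / np.sqrt(dt)                      # effective noise level
    return c, s

# Example basis for the 1D double-well drift b(x) = x - x**3: the Euler term plus
# a higher-order correction term (illustrative only).
basis = [lambda x: x - x**3, lambda x: (x - x**3) * (1.0 - 3.0 * x**2)]
```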

* 20 pages, 9 figures 