Zhenyu Liao

On the Equivalence between Implicit and Explicit Neural Networks: A High-dimensional Viewpoint

Aug 31, 2023
Zenan Ling, Zhenyu Liao, Robert C. Qiu

Implicit neural networks have demonstrated remarkable success in various tasks. However, there is a lack of theoretical analysis of the connections and differences between implicit and explicit networks. In this paper, we study high-dimensional implicit neural networks and derive the high-dimensional equivalents of their corresponding conjugate kernels and neural tangent kernels. Building upon these results, we establish the equivalence between implicit and explicit networks in high dimensions.
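
As a rough numerical illustration of the kernels involved, the sketch below (not the paper's code; the widths, the ReLU activation, the 0.2 scaling of the equilibrium weights, and the fixed-point solver are illustrative assumptions) compares the empirical conjugate kernels of a random explicit layer and a random implicit (equilibrium) layer on Gaussian data.

```python
# Minimal sketch: empirical conjugate kernels (CK) of an explicit and an implicit layer.
import numpy as np

rng = np.random.default_rng(0)
n, p, width = 200, 300, 1000                 # samples, input dimension, hidden width
X = rng.standard_normal((p, n)) / np.sqrt(p)

# Explicit (feedforward) layer: H = relu(W X).
W = rng.standard_normal((width, p)) / np.sqrt(p)
H = np.maximum(W @ X, 0.0)
CK_explicit = H.T @ H / width                # n x n conjugate kernel

# Implicit (equilibrium) layer: Z = relu(A Z + U X), solved by fixed-point iteration;
# A is scaled down so that the iteration is a contraction.
A = 0.2 * rng.standard_normal((width, width)) / np.sqrt(width)
U = rng.standard_normal((width, p)) / np.sqrt(p)
Z = np.zeros((width, n))
for _ in range(50):
    Z = np.maximum(A @ Z + U @ X, 0.0)
CK_implicit = Z.T @ Z / width                # n x n conjugate kernel of the implicit layer

# Compare the leading eigenvalues of the two empirical conjugate kernels.
print(np.linalg.eigvalsh(CK_explicit)[-3:])
print(np.linalg.eigvalsh(CK_implicit)[-3:])
```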

* Accepted by Workshop on High-dimensional Learning Dynamics, ICML 2023, Honolulu, Hawaii 

Analysis and Approximate Inference of Large and Dense Random Kronecker Graphs

Jun 14, 2023
Zhenyu Liao, Yuanqian Xia, Chengmei Niu, Yong Xiao

Random graph models play an increasingly important role in science and industry, with applications ranging from social and traffic networks to recommendation systems and molecular genetics. In this paper, we perform an in-depth analysis of the random Kronecker graph model proposed in \cite{leskovec2010kronecker}, when the number of graph vertices $N$ is large. Building upon recent advances in random matrix theory, we show, in the dense regime, that the random Kronecker graph adjacency matrix approximately follows a signal-plus-noise model, with a low-rank (of order at most $\log N$) signal matrix that is linear in the graph parameters and a random noise matrix whose singular value distribution has a quarter-circle form. This observation allows us to propose a ``denoise-and-solve'' meta algorithm that approximately infers the graph parameters, with reduced computational complexity and an (asymptotic) performance guarantee. Numerical experiments on graph inference and graph classification, on both synthetic and real-world graphs, support the advantageous performance of the proposed approach.
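
The signal-plus-noise structure can be illustrated numerically. The sketch below (with an illustrative $2\times 2$ seed matrix and truncation rank, and with the parameter-solving step omitted) generates one dense random Kronecker graph and applies the "denoise" step via truncated SVD.

```python
# Minimal sketch: signal-plus-noise structure of a dense random Kronecker graph.
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([[0.99, 0.95],
                  [0.95, 0.90]])              # 2x2 seed (parameter) matrix, dense regime
k = 10                                        # N = 2**k = 1024 vertices
P = theta.copy()
for _ in range(k - 1):
    P = np.kron(P, theta)                     # N x N edge-probability matrix

A = (rng.random(P.shape) < P).astype(float)   # one realization of the adjacency matrix
Z = A - P                                     # centered "noise" part

# The noise singular values form a bulk of width O(sqrt(N)), while the signal P is
# numerically low rank, with about 1 + k = O(log N) non-negligible directions.
print("noise bulk edge ~", round(np.linalg.svd(Z, compute_uv=False)[0], 1))

# "Denoise" step of the meta algorithm: keep the top-(k+1) singular directions of A
# (the subsequent "solve" step for the graph parameters is omitted here).
U, s, Vt = np.linalg.svd(A)
A_denoised = U[:, :k + 1] @ np.diag(s[:k + 1]) @ Vt[:k + 1]

err = lambda M: np.linalg.norm(M - P) / np.linalg.norm(P)
print("relative error of A:            ", round(err(A), 2))
print("relative error after denoising: ", round(err(A_denoised), 2))
```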

* 27 pages and 3 figures 

Semantic Image Manipulation with Background-guided Internal Learning

Mar 24, 2022
Zhongping Zhang, Huiwen He, Bryan A. Plummer, Zhenyu Liao, Huayan Wang

Image manipulation has attracted a lot of interest due to its wide range of applications. Prior work modifies images either through low-level manipulation, such as image inpainting or manual edits via paintbrushes and scribbles, or through high-level manipulation, employing deep generative networks to output an image conditioned on high-level semantic input. In this study, we propose Semantic Image Manipulation with Background-guided Internal Learning (SIMBIL), which combines high-level and low-level manipulation. Specifically, users edit an image at the semantic level by applying changes to a scene graph, and our model then manipulates the image at the pixel level according to the modified scene graph. Our approach has two major advantages. First, high-level manipulation of scene graphs requires less manual effort from the user than manipulating raw image pixels. Second, our low-level internal learning approach scales to images of various sizes without relying on external visual datasets for training. We outperform the state of the art in quantitative and qualitative evaluations on the CLEVR and Visual Genome datasets, with an 8-point improvement in FID score (CLEVR) and a 27% improvement in user evaluation (Visual Genome), demonstrating the effectiveness of our approach.

Off-policy Reinforcement Learning with Optimistic Exploration and Distribution Correction

Oct 27, 2021
Jiachen Li, Shuo Cheng, Zhenyu Liao, Huayan Wang, William Yang Wang, Qinxun Bai

Improving the sample efficiency of reinforcement learning algorithms requires effective exploration. Following the principle of $\textit{optimism in the face of uncertainty}$, we train a separate exploration policy to maximize an approximate upper confidence bound of the critics in an off-policy actor-critic framework. However, this introduces an extra discrepancy between the replay buffer and the target policy in terms of their stationary state-action distributions. To mitigate this off-policy-ness, we adapt the recently introduced DICE framework to learn a distribution correction ratio for off-policy actor-critic training. In particular, we correct the training distribution for both the policies and the critics. Empirically, we evaluate our proposed method on several challenging continuous control tasks and show superior performance compared to state-of-the-art methods. We also conduct extensive ablation studies to demonstrate the effectiveness and the rationale of the proposed method.
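
The optimistic exploration objective can be sketched in a few lines of PyTorch. In the sketch below, the ensemble of two critics, the network sizes, and the coefficient `beta` are illustrative assumptions, and the DICE distribution correction is not shown.

```python
# Minimal sketch: an exploration policy trained to maximize an approximate
# upper confidence bound (mean + beta * std) over an ensemble of critics.
import torch
import torch.nn as nn

obs_dim, act_dim, beta = 8, 2, 1.0

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

critics = [mlp(obs_dim + act_dim, 1) for _ in range(2)]   # Q-ensemble
explore_pi = mlp(obs_dim, act_dim)                        # separate exploration policy
opt = torch.optim.Adam(explore_pi.parameters(), lr=3e-4)

obs = torch.randn(256, obs_dim)                           # batch drawn from the replay buffer
act = torch.tanh(explore_pi(obs))
q_values = torch.stack([q(torch.cat([obs, act], dim=-1)) for q in critics], dim=0)
ucb = q_values.mean(dim=0) + beta * q_values.std(dim=0)   # approximate upper confidence bound

loss = -ucb.mean()                                        # gradient ascent on the UCB
opt.zero_grad()
loss.backward()
opt.step()
```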

Fine-Grained Control of Artistic Styles in Image Generation

Oct 25, 2021
Xin Miao, Huayan Wang, Jun Fu, Jiayi Liu, Shen Wang, Zhenyu Liao

Recent advances in generative models and adversarial training have enabled the artificial generation of artworks in various artistic styles. In practice, it is highly desirable to gain more control over the generated style. However, artistic styles are unlike object categories: there is a continuous spectrum of styles distinguished by subtle differences. Few works have explored capturing this continuous spectrum of styles and applying it to style generation. In this paper, we propose to achieve this by embedding original artwork examples into a continuous style space. The style vectors are fed to the generator and discriminator to achieve fine-grained control. Our method can be used with common generative adversarial networks (such as StyleGAN). Experiments show that our method not only precisely controls the fine-grained artistic style but also improves image quality over vanilla StyleGAN, as measured by FID.
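
A minimal sketch of this conditioning mechanism is given below; the toy encoder, generator, and discriminator (and the 64x64 resolution) are illustrative placeholders, not the paper's StyleGAN-based models.

```python
# Minimal sketch: a continuous style vector extracted from reference artworks is
# fed to both the generator and the discriminator.
import torch
import torch.nn as nn

latent_dim, style_dim = 128, 16

style_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, style_dim))
generator = nn.Sequential(nn.Linear(latent_dim + style_dim, 3 * 64 * 64), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(3 * 64 * 64 + style_dim, 1))

artwork = torch.randn(4, 3, 64, 64)            # reference artworks defining the target style
z = torch.randn(4, latent_dim)                 # latent codes

s = style_encoder(artwork)                     # continuous style vectors
fake = generator(torch.cat([z, s], dim=-1)).view(4, 3, 64, 64)
score = discriminator(torch.cat([fake.flatten(1), s], dim=-1))  # style-conditioned critic
```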

Random matrices in service of ML footprint: ternary random features with no performance loss

Oct 05, 2021
Hafiz Tiomoko Ali, Zhenyu Liao, Romain Couillet

In this article, we investigate the spectral behavior of random features kernel matrices of the type ${\bf K} = \mathbb{E}_{{\bf w}} \left[\sigma\left({\bf w}^{\sf T}{\bf x}_i\right)\sigma\left({\bf w}^{\sf T}{\bf x}_j\right)\right]_{i,j=1}^n$, with nonlinear function $\sigma(\cdot)$, data ${\bf x}_1, \ldots, {\bf x}_n \in \mathbb{R}^p$, and random projection vector ${\bf w} \in \mathbb{R}^p$ having i.i.d. entries. In a high-dimensional setting where the number of data points $n$ and their dimension $p$ are both large and comparable, we show, under a Gaussian mixture model for the data, that the eigenspectrum of ${\bf K}$ is independent of the distribution of the i.i.d. (zero-mean and unit-variance) entries of ${\bf w}$, and depends on $\sigma(\cdot)$ only via its (generalized) Gaussian moments $\mathbb{E}_{z\sim \mathcal N(0,1)}[\sigma'(z)]$ and $\mathbb{E}_{z\sim \mathcal N(0,1)}[\sigma''(z)]$. As a result, for any kernel matrix ${\bf K}$ of the form above, we propose a novel random features technique, called Ternary Random Feature (TRF), that (i) asymptotically yields the same limiting kernel as the original ${\bf K}$ in a spectral sense and (ii) can be computed and stored much more efficiently, by carefully tuning (in a data-dependent manner) the function $\sigma$ and the random vector ${\bf w}$, both taking values in $\{-1,0,1\}$. The computation of the proposed random features requires no multiplication and a factor of $b$ fewer bits for storage compared to classical random features such as random Fourier features, with $b$ the number of bits needed to store full-precision values. Moreover, our experiments on real data show that the substantial gains in computation and storage are accompanied by somewhat improved performance compared to state-of-the-art random features compression/quantization methods.
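
The following numpy sketch illustrates the flavor of such ternary random features, with both the projection matrix and the activation taking values in $\{-1,0,1\}$; the sparsity level and the activation threshold below are illustrative placeholders rather than the paper's data-dependent tuning.

```python
# Minimal sketch: ternary random features computed without multiplications.
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 500, 256, 1024                    # samples, dimension, number of random features
X = rng.standard_normal((p, n)) / np.sqrt(p)

sparsity = 0.5                              # P(w_ij = 0); the rest split between -1 and +1
W = rng.choice([-1, 0, 1], size=(m, p),
               p=[(1 - sparsity) / 2, sparsity, (1 - sparsity) / 2])

def sigma_ternary(t, threshold=0.5):
    # ternary activation: maps t to -1, 0 or +1 according to a threshold
    return np.sign(t) * (np.abs(t) > threshold)

# W @ X only adds and subtracts entries of X (no true multiplication is required),
# and the resulting features are ternary, hence cheap to store.
features = sigma_ternary(W @ X)             # values in {-1, 0, 1}
stored = features.astype(np.int8)           # compact storage (2 bits per entry would suffice)
K_trf = features.T @ features / m           # empirical random-features Gram matrix
print(K_trf.shape, stored.nbytes, "bytes if stored as int8")
```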

Hessian Eigenspectra of More Realistic Nonlinear Models

Mar 17, 2021
Zhenyu Liao, Michael W. Mahoney

Given an optimization problem, the Hessian matrix and its eigenspectrum can be used in many ways, ranging from designing more efficient second-order algorithms to performing model analysis and regression diagnostics. When nonlinear models and non-convex problems are considered, strong simplifying assumptions are often made to make Hessian spectral analysis more tractable. This leads to the question of how relevant the conclusions of such analyses are for more realistic nonlinear models. In this paper, we exploit deterministic equivalent techniques from random matrix theory to make a \emph{precise} characterization of the Hessian eigenspectra for a broad family of nonlinear models, including models that generalize the classical generalized linear models, without relying on strong simplifying assumptions used previously. We show that, depending on the data properties, the nonlinear response model, and the loss function, the Hessian can have \emph{qualitatively} different spectral behaviors: of bounded or unbounded support, with single- or multi-bulk, and with isolated eigenvalues on the left- or right-hand side of the bulk. By focusing on such a simple but nontrivial nonlinear model, our analysis takes a step forward to unveil the theoretical origin of many visually striking features observed in more complex machine learning models.
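
For intuition on the object under study, the sketch below (an illustrative logistic-loss setting in the proportional $n \sim p$ regime, not the paper's general family of models) computes the empirical Hessian eigenspectrum of a generalized-linear-type model.

```python
# Minimal sketch: empirical Hessian spectrum of a generalized linear model.
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 500                              # proportionally large n and p
X = rng.standard_normal((p, n)) / np.sqrt(p)  # data matrix, one column per sample

# For a loss l(y, x^T beta), the Hessian in beta is (1/n) * sum_i l''_i x_i x_i^T;
# here l'' is the second derivative of the logistic loss.
beta = rng.standard_normal(p) / np.sqrt(p)    # point at which the Hessian is evaluated
t = X.T @ beta
s = 1.0 / (1.0 + np.exp(-t))
l2 = s * (1.0 - s)

H = (X * l2) @ X.T / n                        # p x p empirical Hessian
eigs = np.linalg.eigvalsh(H)
print("spectrum support: [%.4f, %.4f]" % (eigs.min(), eigs.max()))
```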

* Identical to v1, except for the inclusion of some additional references 

Sparse sketches with small inversion bias

Nov 21, 2020
Michał Dereziński, Zhenyu Liao, Edgar Dobriban, Michael W. Mahoney

For a tall $n\times d$ matrix $A$ and a random $m\times n$ sketching matrix $S$, the sketched estimate of the inverse covariance matrix $(A^\top A)^{-1}$ is typically biased: $E[(\tilde A^\top\tilde A)^{-1}]\ne(A^\top A)^{-1}$, where $\tilde A=SA$. This phenomenon, which we call inversion bias, arises, e.g., in statistics and distributed optimization, when averaging multiple independently constructed estimates of quantities that depend on the inverse covariance. We develop a framework for analyzing inversion bias, based on our proposed concept of an $(\epsilon,\delta)$-unbiased estimator for random matrices. We show that when the sketching matrix $S$ is dense and has i.i.d. sub-gaussian entries, then after simple rescaling, the estimator $(\frac m{m-d}\tilde A^\top\tilde A)^{-1}$ is $(\epsilon,\delta)$-unbiased for $(A^\top A)^{-1}$ with a sketch of size $m=O(d+\sqrt d/\epsilon)$. This implies that for $m=O(d)$, the inversion bias of this estimator is $O(1/\sqrt d)$, which is much smaller than the $\Theta(1)$ approximation error obtained as a consequence of the subspace embedding guarantee for sub-gaussian sketches. We then propose a new sketching technique, called LEverage Score Sparsified (LESS) embeddings, which uses ideas from both data-oblivious sparse embeddings as well as data-aware leverage-based row sampling methods, to get $\epsilon$ inversion bias for sketch size $m=O(d\log d+\sqrt d/\epsilon)$ in time $O(\text{nnz}(A)\log n+md^2)$, where nnz is the number of non-zeros. The key techniques enabling our analysis include an extension of a classical inequality of Bai and Silverstein for random quadratic forms, which we call the Restricted Bai-Silverstein inequality; and anti-concentration of the Binomial distribution via the Paley-Zygmund inequality, which we use to prove a lower bound showing that leverage score sampling sketches generally do not achieve small inversion bias.
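
The effect of the $\frac m{m-d}$ rescaling can be checked numerically. The sketch below (illustrative sizes and a plain dense Gaussian sketching matrix, not LESS embeddings) averages many independent estimates of the inverse covariance with and without the rescaling.

```python
# Minimal sketch: inversion bias of sketched inverse covariance estimates,
# with and without the m/(m-d) rescaling.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, trials = 2000, 20, 100, 400
A = rng.standard_normal((n, d))
target = np.linalg.inv(A.T @ A)

plain = np.zeros((d, d))
rescaled = np.zeros((d, d))
for _ in range(trials):
    S = rng.standard_normal((m, n)) / np.sqrt(m)      # dense (sub-)Gaussian sketch
    SA = S @ A
    plain += np.linalg.inv(SA.T @ SA) / trials
    rescaled += np.linalg.inv((m / (m - d)) * (SA.T @ SA)) / trials

err = lambda M: np.linalg.norm(M - target) / np.linalg.norm(target)
print("bias without rescaling:     ", round(err(plain), 3))
print("bias with m/(m-d) rescaling:", round(err(rescaled), 3))
```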

Kernel regression in high dimension: Refined analysis beyond double descent

Oct 06, 2020
Fanghui Liu, Zhenyu Liao, Johan A. K. Suykens

In this paper, we provide a precise characterization of the generalization properties of high-dimensional kernel ridge regression across the under- and over-parameterized regimes, depending on whether the number of training data $n$ exceeds the feature dimension $d$. By establishing a novel bias-variance decomposition of the expected excess risk, we show that, while the bias is independent of $d$ and monotonically decreases with $n$, the variance depends on both $n$ and $d$ and can be unimodal or monotonically decreasing under different regularization schemes. Our refined analysis goes beyond the double descent theory by showing that, depending on the data eigen-profile and the level of regularization, the kernel regression risk curve can be a double-descent-like, bell-shaped, or monotonic function of $n$. Experiments on synthetic and real data are conducted to support our theoretical findings.
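
A simple way to probe such risk curves numerically is sketched below, sweeping $n$ across the under- and over-parameterized regimes; the Gaussian kernel, the noisy linear target, and the fixed ridge level are illustrative choices, not the paper's exact experimental setup.

```python
# Minimal sketch: kernel ridge regression test risk as n crosses the dimension d.
import numpy as np

rng = np.random.default_rng(0)
d, lam, n_test = 100, 1e-3, 500
beta = rng.standard_normal(d) / np.sqrt(d)

def gaussian_kernel(X, Y, gamma=1.0 / 100):
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

X_test = rng.standard_normal((n_test, d))
y_test = X_test @ beta

for n in [50, 100, 200, 400, 800]:                    # from n < d to n > d
    X = rng.standard_normal((n, d))
    y = X @ beta + 0.5 * rng.standard_normal(n)       # noisy linear target
    alpha = np.linalg.solve(gaussian_kernel(X, X) + lam * n * np.eye(n), y)
    y_hat = gaussian_kernel(X_test, X) @ alpha
    print(n, "test risk:", round(float(np.mean((y_hat - y_test) ** 2)), 3))
```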

* 30 pages, 13 figures 