Chenglong Bao

Addressing preferred orientation in single-particle cryo-EM through AI-generated auxiliary particles

Sep 26, 2023
Hui Zhang, Dihan Zheng, Qiurong Wu, Nieng Yan, Zuoqiang Shi, Mingxu Hu, Chenglong Bao

Single-particle cryo-EM faces the persistent challenge of preferred orientation, for which no general computational solution exists. We introduce cryoPROS, an AI-based approach designed to address this issue. By generating auxiliary particles with a conditional deep generative model, cryoPROS addresses the intrinsic bias in orientation estimation for the observed particles. We applied cryoPROS to the single-particle analysis of the hemagglutinin trimer, recovering a near-atomic-resolution structure from non-tilted data. Moreover, an enhanced version, cryoPROS-MP, significantly improves the resolution of the membrane protein NaX using non-tilted data that include the effects of micelles. Unlike classical approaches, cryoPROS requires no special experimental or image-acquisition techniques, providing a purely computational yet effective solution to the preferred orientation problem. Finally, we conduct extensive experiments establishing the low risk of model bias and the high robustness of cryoPROS.
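
A tiny numerical illustration of why auxiliary particles help (a toy based on the central-slice theorem, not the cryoPROS pipeline; all function names below are hypothetical): each particle view samples a Fourier plane perpendicular to its viewing direction, so views clustered around one axis leave a cone of Fourier space unsampled, and supplementing them with views at complementary orientations fills the gap.

```python
# Toy illustration of the preferred-orientation problem in Fourier space.
import numpy as np

rng = np.random.default_rng(0)

def random_directions(n, concentration=0.0):
    """Unit vectors on the sphere; concentration > 0 biases them toward +z."""
    v = rng.normal(size=(n, 3))
    v[:, 2] += concentration                      # crude bias toward the z-axis
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def fourier_coverage(views, n_test=2000, tol=np.deg2rad(3)):
    """Fraction of Fourier directions lying within `tol` of some central slice."""
    test = random_directions(n_test)              # directions to check
    cosines = np.abs(test @ views.T)              # |u . v| for all pairs
    covered = (cosines < np.sin(tol)).any(axis=1) # near-perpendicular => sampled
    return covered.mean()

observed = random_directions(1000, concentration=8.0)   # strongly preferred views
auxiliary = random_directions(1000, concentration=0.0)  # uniformly oriented auxiliary views

print("coverage, observed only       :", fourier_coverage(observed))
print("coverage, observed + auxiliary:", fourier_coverage(np.vstack([observed, auxiliary])))
```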

An axiomatized PDE model of deep neural networks

Jul 23, 2023
Tangjun Wang, Wenqi Tao, Chenglong Bao, Zuoqiang Shi

Inspired by the relation between deep neural networks (DNNs) and partial differential equations (PDEs), we study the general form of PDE models of deep neural networks. To this end, we formulate a DNN as an evolution operator acting on a simple base model. Under several reasonable assumptions, we prove that this evolution operator is determined by a convection-diffusion equation. The convection-diffusion model provides a mathematical explanation for several effective network architectures. Moreover, we show that the convection-diffusion model improves robustness and reduces the Rademacher complexity. Based on the convection-diffusion equation, we design a new training method for ResNets. Experiments validate the performance of the proposed method.
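
A minimal sketch of the convection-diffusion reading of a residual block, assuming one common realization of the diffusion term (Euler-Maruyama noise injection during training); this is illustrative only and not necessarily the training method proposed in the paper.

```python
import torch
import torch.nn as nn

class ConvectionDiffusionBlock(nn.Module):
    def __init__(self, dim, step=0.1, sigma=0.05):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.step, self.sigma = step, sigma

    def forward(self, x):
        drift = self.f(x)                          # convection / transport term
        x = x + self.step * drift                  # forward Euler step (plain ResNet block)
        if self.training and self.sigma > 0:       # diffusion: sqrt(h)-scaled Gaussian noise
            x = x + self.sigma * (self.step ** 0.5) * torch.randn_like(x)
        return x

net = nn.Sequential(*[ConvectionDiffusionBlock(64) for _ in range(4)], nn.Linear(64, 10))
logits = net(torch.randn(8, 64))                   # forward pass on a toy batch
```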

Semi-Supervised Clustering via Dynamic Graph Structure Learning

Sep 06, 2022
Huaming Ling, Chenglong Bao, Xin Liang, Zuoqiang Shi

Most existing semi-supervised graph-based clustering methods exploit the supervisory information by either refining the affinity matrix or directly constraining the low-dimensional representations of the data points. The affinity matrix represents the graph structure and is vital to the performance of semi-supervised graph-based clustering. However, existing methods adopt a static affinity matrix when learning the low-dimensional representations and do not optimize the affinity matrix during the learning process. In this paper, we propose a novel dynamic graph structure learning method for semi-supervised clustering. In this method, we simultaneously optimize the affinity matrix and the low-dimensional representations of the data points by leveraging the given pairwise constraints. Moreover, we propose an alternating minimization approach with proven convergence to solve the resulting nonconvex model. During the iterations, our method cyclically updates the low-dimensional representations and refines the affinity matrix, leading to a dynamic affinity matrix (graph structure). Specifically, when updating the affinity matrix, we force pairs of data points with markedly different low-dimensional representations to have an affinity value of 0. Furthermore, we construct the initial affinity matrix by integrating local distances and the global self-representation of the data points. Experimental results on eight benchmark datasets under different settings show the advantages of the proposed approach.
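
A schematic sketch, under simplifying assumptions, of the alternating scheme described above: spectral embeddings are computed from the current affinity matrix, and the affinity matrix is then refined by pruning pairs with distant embeddings and enforcing the pairwise constraints. The function names, the spectral embedding step, and the threshold `tau` are illustrative, not the paper's exact model.

```python
import numpy as np
from scipy.spatial.distance import cdist

def spectral_embedding(W, k):
    """Bottom-k eigenvectors of the normalized Laplacian of affinity W."""
    d = W.sum(axis=1)
    L = np.eye(len(W)) - W / np.sqrt(np.outer(d, d) + 1e-12)
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, :k]

def refine_affinity(W, F, must_link, cannot_link, tau):
    """Zero affinities between points whose embeddings differ strongly."""
    D = cdist(F, F)
    W = np.where(D > tau, 0.0, W)                 # dynamic graph: prune distant pairs
    for i, j in must_link:
        W[i, j] = W[j, i] = 1.0                   # supervision: enforce links
    for i, j in cannot_link:
        W[i, j] = W[j, i] = 0.0
    np.fill_diagonal(W, 0.0)
    return W

def semi_supervised_clustering(X, k, must_link, cannot_link, iters=10, tau=1.0):
    W = np.exp(-cdist(X, X) ** 2)                 # simple Gaussian-kernel initial affinity
    for _ in range(iters):                        # alternating minimization loop
        F = spectral_embedding(W, k)
        W = refine_affinity(W, F, must_link, cannot_link, tau)
    return F, W

X = np.random.default_rng(0).normal(size=(60, 5))
F, W = semi_supervised_clustering(X, k=3, must_link=[(0, 1)], cannot_link=[(0, 59)])
```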

Convergence Rates of Training Deep Neural Networks via Alternating Minimization Methods

Aug 30, 2022
Jintao Xu, Chenglong Bao, Wenxun Xing

Training deep neural networks (DNNs) is an important and challenging optimization problem in machine learning due to its non-convexity and non-separable structure. Alternating minimization (AM) approaches split the compositional structure of DNNs and have drawn great interest in the deep learning and optimization communities. In this paper, we propose a unified framework for analyzing the convergence rate of AM-type network training methods. Our analysis is based on $j$-step sufficient decrease conditions and the Kurdyka-Lojasiewicz (KL) property, which relaxes the requirement of designing descent algorithms. We derive detailed local convergence rates as the KL exponent $\theta$ varies in $[0,1)$. Moreover, local R-linear convergence is discussed under a stronger $j$-step sufficient decrease condition.
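
For context, the generic Kurdyka-Lojasiewicz machinery behind such rates can be summarized as follows; this is the standard template from the KL literature, while the paper derives analogous rates under the relaxed $j$-step sufficient decrease conditions.

```latex
% f satisfies the KL property at a stationary point x* with exponent
% \theta \in [0,1) if, near x*,
\[
  |f(x) - f(x^\ast)|^{\theta} \;\le\; C\,\operatorname{dist}\bigl(0, \partial f(x)\bigr).
\]
% For descent-type iterations this typically yields:
%   \theta = 0:           convergence in finitely many steps;
%   \theta \in (0,1/2]:   local R-linear convergence;
%   \theta \in (1/2,1):   sublinear rate O(k^{-(1-\theta)/(2\theta-1)}).
```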

A scalable deep learning approach for solving high-dimensional dynamic optimal transport

May 16, 2022
Wei Wan, Yuejin Zhang, Chenglong Bao, Bin Dong, Zuoqiang Shi

The dynamic formulation of optimal transport has attracted growing interest in scientific computing and machine learning, and its computation requires solving a PDE-constrained optimization problem. Classical Eulerian discretization-based approaches suffer from the curse of dimensionality, which arises from approximating the high-dimensional velocity field. In this work, we propose a deep learning based method to solve dynamic optimal transport in high-dimensional spaces. Our method contains three main ingredients: a carefully designed representation of the velocity field, a discretization of the PDE constraint along the characteristics, and the computation of high-dimensional integrals by the Monte Carlo method at each time step. Specifically, to represent the velocity field, we apply classical nodal basis functions in time and deep neural networks in the spatial domain, with H1-norm regularization. This technique promotes the regularity of the velocity field in both time and space, so that the discretization along the characteristics remains stable during training. Extensive numerical examples have been conducted to test the proposed method. Compared to other optimal transport solvers, our method gives more accurate results in high-dimensional cases and scales well with the dimension. Finally, we extend our method to more complicated settings such as the crowd motion problem.
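
A minimal PyTorch sketch of the three ingredients listed above, under simplifying assumptions (class and function names are illustrative, and the H1-norm regularization and the terminal-distribution matching term are omitted): hat functions in time combined with small MLPs in space define the velocity field, samples are pushed along characteristics by forward Euler, and the kinetic energy is estimated by Monte Carlo.

```python
import torch
import torch.nn as nn

class Velocity(nn.Module):
    def __init__(self, dim, n_nodes=8, width=64):
        super().__init__()
        self.nodes = torch.linspace(0.0, 1.0, n_nodes)   # time grid for the hat functions
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, width), nn.Tanh(), nn.Linear(width, dim))
            for _ in range(n_nodes)
        )

    def forward(self, t, x):
        h = self.nodes[1] - self.nodes[0]
        phi = torch.clamp(1.0 - torch.abs(t - self.nodes) / h, min=0.0)  # hat basis at time t
        return sum(w * net(x) for w, net in zip(phi, self.nets))

def kinetic_energy(v, x0, n_steps=20):
    """Monte Carlo estimate of int_0^1 E|v(t, X_t)|^2 dt along Euler characteristics."""
    dt, x, energy = 1.0 / n_steps, x0, 0.0
    for k in range(n_steps):
        vel = v(torch.tensor(k * dt), x)
        energy = energy + (vel ** 2).sum(dim=1).mean() * dt
        x = x + dt * vel                                  # move samples along characteristics
    return energy, x                                      # x approximates the pushforward of x0

dim = 10
v = Velocity(dim)
x0 = torch.randn(512, dim)                                # samples from the source measure
energy, x1 = kinetic_energy(v, x0)
# In the full method, `energy` is minimized subject to x1 matching the target measure.
```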

Learn from Unpaired Data for Image Restoration: A Variational Bayes Approach

Apr 21, 2022
Dihan Zheng, Xiaowen Zhang, Kaisheng Ma, Chenglong Bao

Collecting paired training data is difficult in practice, but unpaired samples are broadly available. Current approaches aim to generate synthesized training data from unpaired samples by exploring the relationship between corrupted and clean data. This work proposes LUD-VAE, a deep generative method that learns the joint probability density function from data sampled from the marginal distributions. Our approach is based on a carefully designed probabilistic graphical model in which the clean and corrupted data domains are conditionally independent. Using variational inference, we maximize the evidence lower bound (ELBO) to estimate the joint probability density function. Furthermore, we show that the ELBO is computable without paired samples under an inference-invariant assumption, which provides the mathematical rationale for our approach in the unpaired setting. Finally, we apply our method to real-world image denoising and super-resolution tasks, training the models on synthetic data generated by LUD-VAE. Experimental results validate the advantages of our method over other learnable approaches.
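
A rough sketch of the kind of model described above, assuming Gaussian encoder and decoders and a shared encoder for both domains (names and architecture are illustrative, not the exact LUD-VAE): a shared latent variable with two decoders makes the clean and corrupted domains conditionally independent, each marginal is trained with its own ELBO, and synthetic pairs are produced by decoding a clean sample through the corrupted-domain decoder.

```python
import torch
import torch.nn as nn

class TwoDomainVAE(nn.Module):
    def __init__(self, dim, zdim=16, width=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, width), nn.ReLU(), nn.Linear(width, 2 * zdim))
        self.dec_clean = nn.Sequential(nn.Linear(zdim, width), nn.ReLU(), nn.Linear(width, dim))
        self.dec_corrupt = nn.Sequential(nn.Linear(zdim, width), nn.ReLU(), nn.Linear(width, dim))

    def elbo(self, data, decoder):
        mu, logvar = self.enc(data).chunk(2, dim=1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)        # reparameterization
        recon = -((decoder(z) - data) ** 2).sum(dim=1)                 # Gaussian log-likelihood (up to a constant)
        kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1.0).sum(dim=1)  # KL(q(z|.) || N(0, I))
        return (recon - kl).mean()

    def synthesize_pair(self, clean):
        """Encode a clean sample, decode through the corrupted-domain decoder."""
        mu, _ = self.enc(clean).chunk(2, dim=1)
        return clean, self.dec_corrupt(mu)

model = TwoDomainVAE(dim=64)
clean_batch, corrupt_batch = torch.randn(32, 64), torch.randn(32, 64)  # unpaired batches
loss = -(model.elbo(clean_batch, model.dec_clean) + model.elbo(corrupt_batch, model.dec_corrupt))
```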

Unsupervised Deep Learning Meets Chan-Vese Model

Apr 14, 2022
Dihan Zheng, Chenglong Bao, Zuoqiang Shi, Haibin Ling, Kaisheng Ma

The Chan-Vese (CV) model is a classic region-based method for image segmentation. However, its piecewise-constant assumption does not always hold in practical applications. Many improvements have been proposed, but the issue remains far from fully resolved. In this work, we propose an unsupervised image segmentation approach that integrates the CV model with deep neural networks, significantly improving the original CV model's segmentation accuracy. Our basic idea is to apply a deep neural network that maps the image into a latent space, alleviating the violation of the piecewise-constant assumption in image space. We formulate this idea under the classical Bayesian framework by approximating the likelihood with an evidence lower bound (ELBO) term while keeping the prior term of the CV model. Thus, our model needs only the input image itself and does not require pre-training on external datasets. Moreover, we extend the idea to the multi-phase case and to dataset-based unsupervised image segmentation. Extensive experiments validate the effectiveness of our model and show that the proposed method is noticeably better than other unsupervised segmentation approaches.
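
An illustrative sketch, not the paper's exact formulation: a small network predicts per-pixel features and a soft mask, and a Chan-Vese-style two-phase energy (region fit plus a total-variation length prior) is minimized on the learned features of the input image itself.

```python
import torch
import torch.nn as nn

feature_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(16, 8, 3, padding=1))
mask_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

def chan_vese_energy(image, mu_tv=0.1, eps=1e-6):
    feat, m = feature_net(image), mask_net(image)                            # features and soft mask in [0, 1]
    c1 = (feat * m).sum(dim=(2, 3)) / (m.sum(dim=(2, 3)) + eps)              # inside mean (per channel)
    c2 = (feat * (1 - m)).sum(dim=(2, 3)) / ((1 - m).sum(dim=(2, 3)) + eps)  # outside mean
    fit = (m * (feat - c1[..., None, None]) ** 2
           + (1 - m) * (feat - c2[..., None, None]) ** 2).mean()
    tv = (m[:, :, 1:, :] - m[:, :, :-1, :]).abs().mean() \
       + (m[:, :, :, 1:] - m[:, :, :, :-1]).abs().mean()
    return fit + mu_tv * tv                                                  # region fit + length (TV) prior

image = torch.rand(1, 1, 64, 64)
loss = chan_vese_energy(image)   # minimized w.r.t. both networks on the input image itself
```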

AFEC: Active Forgetting of Negative Transfer in Continual Learning

Nov 04, 2021
Liyuan Wang, Mingtian Zhang, Zhongfan Jia, Qian Li, Chenglong Bao, Kaisheng Ma, Jun Zhu, Yi Zhong

Continual learning aims to learn a sequence of tasks from dynamic data distributions. Without access to the old training samples, knowledge transfer from old tasks to each new task is difficult to determine and may be either positive or negative. If the old knowledge interferes with the learning of a new task, i.e., the forward knowledge transfer is negative, then precisely remembering the old tasks further aggravates the interference and decreases the performance of continual learning. By contrast, biological neural networks can actively forget old knowledge that conflicts with the learning of a new experience by regulating learning-triggered synaptic expansion and synaptic convergence. Inspired by biological active forgetting, we propose to actively forget the old knowledge that limits the learning of new tasks, to the benefit of continual learning. Under the framework of Bayesian continual learning, we develop a novel approach named Active Forgetting with synaptic Expansion-Convergence (AFEC). Our method dynamically expands parameters to learn each new task and then selectively combines them, which is formally consistent with the underlying mechanism of biological active forgetting. We extensively evaluate AFEC on a variety of continual learning benchmarks, including CIFAR-10 regression tasks, visual classification tasks, and Atari reinforcement learning tasks, where AFEC effectively improves the learning of new tasks and achieves state-of-the-art performance in a plug-and-play way.

* 35th Conference on Neural Information Processing Systems (NeurIPS 2021)  
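
A schematic sketch of an expansion-convergence style regularizer in the spirit of the description above (the weights, names, and exact form are illustrative assumptions, not the AFEC objective): the current parameters are pulled both toward the old-task solution, to retain old knowledge, and toward parameters trained on the new task alone, to actively forget conflicting knowledge.

```python
import torch

def expansion_convergence_penalty(params, old_params, old_importance,
                                  expanded_params, expanded_importance,
                                  lam_old=1.0, lam_new=1.0):
    penalty = 0.0
    for p, p_old, w_old, p_exp, w_exp in zip(params, old_params, old_importance,
                                             expanded_params, expanded_importance):
        penalty = penalty + lam_old * (w_old * (p - p_old) ** 2).sum()  # retain old tasks
        penalty = penalty + lam_new * (w_exp * (p - p_exp) ** 2).sum()  # converge to the new-task expansion
    return penalty

# Toy usage with dummy tensors; in practice the importance weights would be
# Fisher-information-like estimates and the total loss is new_task_loss + penalty.
p = [torch.randn(4, 4, requires_grad=True)]
pen = expansion_convergence_penalty(p, [torch.zeros(4, 4)], [torch.ones(4, 4)],
                                    [torch.randn(4, 4)], [torch.ones(4, 4)])
```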

Stochastic Anderson Mixing for Nonconvex Stochastic Optimization

Oct 04, 2021
Fuchao Wei, Chenglong Bao, Yang Liu

Anderson mixing (AM) is an acceleration method for fixed-point iterations. Despite its success and wide usage in scientific computing, the convergence theory of AM remains unclear, and its applications to machine learning problems are not well explored. In this paper, by introducing damped projection and adaptive regularization into classical AM, we propose a Stochastic Anderson Mixing (SAM) scheme for solving nonconvex stochastic optimization problems. Under mild assumptions, we establish the convergence theory of SAM, including almost sure convergence to stationary points and the worst-case iteration complexity. Moreover, the complexity bound can be improved when an iterate is randomly chosen as the output. To further accelerate convergence, we incorporate a variance reduction technique into SAM. We also propose a preconditioned mixing strategy for SAM that can empirically achieve faster convergence or better generalization. Finally, we apply SAM to train various neural networks, including a vanilla CNN, ResNets, WideResNet, ResNeXt, DenseNet, and an RNN. Experimental results on image classification and language modeling demonstrate the advantages of our method.

* Accepted by the 35th Conference on Neural Information Processing Systems (NeurIPS 2021) 
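
For reference, a standard implementation of classical Anderson mixing for a deterministic fixed-point map; the paper's SAM scheme builds on this idea and additionally introduces damped projection and adaptive regularization to cope with stochastic gradients.

```python
import numpy as np

def anderson_mixing(g, x0, m=5, beta=1.0, iters=50, reg=1e-10):
    """Classical (type-II) Anderson mixing for the fixed-point problem x = g(x)."""
    x = x0.copy()
    X_hist, R_hist = [], []                       # histories of iterate / residual differences
    x_prev, r_prev = None, None
    for _ in range(iters):
        r = g(x) - x                              # fixed-point residual
        if r_prev is not None:
            X_hist.append(x - x_prev)
            R_hist.append(r - r_prev)
            X_hist, R_hist = X_hist[-m:], R_hist[-m:]   # keep a window of size m
        x_prev, r_prev = x.copy(), r.copy()
        if X_hist:
            dX, dR = np.stack(X_hist, axis=1), np.stack(R_hist, axis=1)
            # Tikhonov-regularized least-squares extrapolation coefficients
            gamma = np.linalg.solve(dR.T @ dR + reg * np.eye(dR.shape[1]), dR.T @ r)
            x = x + beta * r - (dX + beta * dR) @ gamma
        else:
            x = x + beta * r                      # plain fixed-point / mixing step
    return x

# Example: solve x = cos(x) componentwise.
sol = anderson_mixing(np.cos, np.ones(3))
print(sol, np.cos(sol))                           # both close to 0.739...
```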