Shengdong Zhang
Stochastic Whitening Batch Normalization

Jun 03, 2021
Shengdong Zhang, Ehsan Nezhadarya, Homa Fashandi, Jiayi Liu, Darin Graham, Mohak Shah

Figures 1-4 for Stochastic Whitening Batch Normalization

Batch Normalization (BN) is a popular technique for training Deep Neural Networks (DNNs). BN uses scaling and shifting to normalize activations of mini-batches to accelerate convergence and improve generalization. The recently proposed Iterative Normalization (IterNorm) method improves these properties by whitening the activations iteratively using Newton's method. However, since Newton's method initializes the whitening matrix independently at each training step, no information is shared between consecutive steps. In this work, instead of computing the whitening matrix exactly at each training step, we estimate it gradually during training in an online fashion, using our proposed Stochastic Whitening Batch Normalization (SWBN) algorithm. We show that while SWBN improves the convergence rate and generalization of DNNs, its computational overhead is less than that of IterNorm. Due to its high efficiency, the proposed method can easily be employed in DNN architectures with a large number of layers. We provide comprehensive experiments and comparisons between BN, IterNorm, and SWBN layers to demonstrate the effectiveness of the proposed technique in conventional (many-shot) and few-shot image classification tasks.

* Accepted to the Main Conference of CVPR 2021 

Analysis of tunnel failure characteristics under multiple explosion loads based on persistent homology-based machine learning

Sep 19, 2020
Shengdong Zhang, Shihui You, Longfei Chen, Xiaofei Liu

Figures 1-3 for Analysis of tunnel failure characteristics under multiple explosion loads based on persistent homology-based machine learning

The study of tunnel failure characteristics under external explosion loads is an important problem in tunnel design and protection; in particular, it is of great significance to construct an intelligent topological description of the tunnel failure process. The failure characteristics of tunnels under explosive loading are described using the discrete element method and persistent homology-based machine learning. First, a discrete element model of a shallow buried tunnel was established in discrete element software, the explosive load was approximated, via the Saint-Venant principle, as a series of uniformly distributed loads acting on the surface, and the dynamic response of the tunnel under multiple explosive loads was obtained through iterative calculation. The topological characteristics of the surrounding rock are then studied with persistent homology-based machine learning: the geometric, physical, and inter-unit characteristics of the tunnel under explosive loading are extracted, a nonlinear mapping is established between the persistent homology descriptors and the failure characteristics of the surrounding rock, and an intelligent description of the tunnel failure characteristics is obtained. The research shows that the length of the longest Betti-1 barcode is closely related to the stability of the tunnel and can be used for effective early warning of tunnel failure, and that an intelligent description of the tunnel failure process can be established, providing a new approach to tunnel engineering protection.
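The abstract's key quantity is the length of the longest persistence bar. Computing Betti-1 barcodes needs a full TDA library (e.g. Ripser), but the barcode idea can be illustrated in the 0-dimensional case, where persistence reduces to a minimum spanning tree; this is a simplified stand-in for intuition, not the paper's pipeline:

```python
import numpy as np
from itertools import combinations

def h0_barcode(points):
    """0-dimensional persistence barcode of a point cloud under the
    Vietoris-Rips filtration: every component is born at scale 0 and dies
    when it merges into another. The death scales are exactly the edge
    lengths of a minimum spanning tree (Kruskal's algorithm below)."""
    n = len(points)
    parent = list(range(n))

    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted(
        (np.linalg.norm(points[i] - points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj            # two components merge at scale d
            deaths.append(d)           # one bar [0, d) ends here
    return deaths                      # n-1 finite bars; one bar never dies
```

For two well-separated clusters, one long bar records the gap between them while all other bars stay short, mirroring how the paper reads tunnel stability off the longest bar in the (higher-dimensional) barcode.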


From CDF to PDF --- A Density Estimation Method for High Dimensional Data

Apr 15, 2018
Shengdong Zhang

Figures 1-4 for From CDF to PDF --- A Density Estimation Method for High Dimensional Data

CDF2PDF is a method of PDF estimation by approximating the CDF. The original idea was previously proposed in [1] under the name SIC. However, SIC requires additional hyper-parameter tuning, and [1] provides no algorithm for computing higher-order derivatives of a trained NN. CDF2PDF improves on SIC by avoiding the time-consuming hyper-parameter tuning and by enabling higher-order derivative computation in polynomial time. Experiments with this method on one-dimensional data show promising results.
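The core identity behind the method is that the PDF is the derivative of the CDF, so a differentiable fit of the CDF yields a density estimate for free. A toy sketch of that idea under stated assumptions: instead of the paper's neural network, this uses a linear combination of sigmoid basis functions (fit by ridge-regularized least squares), whose derivative is available in closed form; the basis count, width, and ridge strength are arbitrary choices for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_cdf2pdf(samples, n_basis=20, width=0.5, ridge=1e-6):
    """Toy CDF2PDF: fit the empirical CDF with sigmoid basis functions,
    then return the exact derivative of that fit as the PDF estimate."""
    xs = np.sort(samples)
    ecdf = np.arange(1, len(xs) + 1) / len(xs)     # empirical CDF values
    centers = np.linspace(xs[0], xs[-1], n_basis)
    Phi = sigmoid((xs[:, None] - centers) / width)
    # Small ridge term keeps the overlapping basis well-conditioned.
    G = Phi.T @ Phi + ridge * len(xs) * np.eye(n_basis)
    w = np.linalg.solve(G, Phi.T @ ecdf)

    def pdf(x):
        s = sigmoid((np.asarray(x)[:, None] - centers) / width)
        return (s * (1 - s) / width) @ w           # d/dx of the CDF fit
    return pdf
```

Note that nothing constrains this toy fit to be monotone, so the density estimate can dip slightly below zero; handling such constraints properly is part of what the actual method addresses.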


Deep Symbolic Representation Learning for Heterogeneous Time-series Classification

Dec 05, 2016
Shengdong Zhang, Soheil Bahrampour, Naveen Ramakrishnan, Mohak Shah

Figures 1-4 for Deep Symbolic Representation Learning for Heterogeneous Time-series Classification

In this paper, we consider the problem of event classification with multi-variate time series data consisting of heterogeneous (continuous and categorical) variables. The complex temporal dependencies between the variables, combined with the sparsity of the data, make the event classification problem particularly challenging. Most state-of-the-art approaches address this either by designing hand-engineered features or by breaking the problem up over homogeneous variates. In this work, we propose and compare three representation learning algorithms over symbolized sequences that enable classification of heterogeneous time-series data with a deep architecture. The proposed representations are trained jointly with the rest of the network in an end-to-end fashion, making the learned features discriminative for the given task. Experiments on three real-world datasets demonstrate the effectiveness of the proposed approaches.
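Symbolization is what puts continuous and categorical variates on common footing: a real-valued series is discretized into the same kind of token sequence a categorical variable already is. As a hedged illustration, here is a SAX-style symbolizer (an assumed stand-in; the abstract does not fix a particular symbolization scheme) built from z-normalization, piecewise aggregate averaging, and equiprobable Gaussian breakpoints:

```python
import numpy as np
from statistics import NormalDist

def symbolize(series, n_segments=8, alphabet_size=4):
    """SAX-style symbolization of a real-valued time series:
    1. z-normalize the series,
    2. average over equal-width segments (piecewise aggregate approximation),
    3. map each segment mean to a letter using breakpoints that split a
       standard normal into equal-probability bins."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-8)
    paa = x.reshape(n_segments, -1).mean(axis=1)   # assumes len % n_segments == 0
    breakpoints = [NormalDist().inv_cdf(i / alphabet_size)
                   for i in range(1, alphabet_size)]
    return "".join(chr(ord("a") + int(np.searchsorted(breakpoints, v)))
                   for v in paa)
```

A sine period becomes a short string whose letters rise and fall with the signal, and a categorical variate maps to symbols directly; once all variates are token sequences, a shared embedding layer can be trained end-to-end over them, as the paper proposes.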
