Lam M. Nguyen

Batch Clipping and Adaptive Layerwise Clipping for Differential Private Stochastic Gradient Descent

Jul 21, 2023
Toan N. Nguyen, Phuong Ha Nguyen, Lam M. Nguyen, Marten Van Dijk

Each round in Differentially Private Stochastic Gradient Descent (DPSGD) transmits a sum of clipped gradients obfuscated with Gaussian noise to a central server, which uses it to update a global model, often a deep neural network. Since the clipped gradients are computed separately, which we call Individual Clipping (IC), deep neural networks like ResNet-18 cannot use Batch Normalization Layers (BNL), a crucial component for achieving high accuracy. To utilize BNL, we introduce Batch Clipping (BC): instead of clipping single gradients as in the original DPSGD, we average and clip batches of gradients. Moreover, the model entries of different layers have different sensitivities to the added Gaussian noise. Therefore, Adaptive Layerwise Clipping (ALC) methods, where each layer has its own adaptively fine-tuned clipping constant, have been introduced and studied, but so far without rigorous DP proofs. In this paper, we propose {\em a new ALC and provide rigorous DP proofs for both BC and ALC}. Experiments show that our modified DPSGD with BC and ALC for CIFAR-$10$ with ResNet-$18$ converges, while DPSGD with IC and ALC does not.
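To make the distinction concrete, here is a minimal NumPy sketch of individual clipping versus batch clipping; the function names and noise calibration are illustrative assumptions, not the paper's reference implementation.

    import numpy as np

    def clip(v, C):
        # Scale v so its L2 norm is at most the clipping constant C.
        norm = np.linalg.norm(v)
        return v * min(1.0, C / norm) if norm > 0 else v

    def ic_update(per_example_grads, C, sigma, rng):
        # Individual clipping (IC): clip each example's gradient separately,
        # then sum and add Gaussian noise.
        total = sum(clip(g, C) for g in per_example_grads)
        return total + rng.normal(0.0, sigma * C, size=total.shape)

    def bc_update(per_example_grads, C, sigma, rng):
        # Batch clipping (BC): average the batch first, clip once, add noise.
        avg = np.mean(per_example_grads, axis=0)
        return clip(avg, C) + rng.normal(0.0, sigma * C, size=avg.shape)

Because BC clips a single aggregate, gradients inside a batch never need to be computed per example, which is what makes batch-dependent layers like BNL usable again.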

* 20 pages, 18 Figures 

Learning Robust and Consistent Time Series Representations: A Dilated Inception-Based Approach

Jun 11, 2023
Anh Duy Nguyen, Trang H. Tran, Hieu H. Pham, Phi Le Nguyen, Lam M. Nguyen

Representation learning for time series has been an important research area for decades. Since the emergence of foundation models, this topic has attracted considerable attention in contrastive self-supervised learning as a way to solve a wide range of downstream tasks. However, contrastive time series processing faces several challenges. First, no prior work considers noise, one of the critical factors affecting the efficacy of time series tasks. Second, there is a lack of efficient yet lightweight encoder architectures that can learn informative representations robust across various downstream tasks. To fill these gaps, we introduce a novel sampling strategy that promotes consistent representation learning in the presence of noise in natural time series. In addition, we propose an encoder architecture that utilizes dilated convolution within the Inception block to create a scalable and robust network with a wide receptive field. Experiments demonstrate that our method consistently outperforms state-of-the-art methods in forecasting, classification, and abnormality detection tasks, e.g., it ranks first on over two-thirds of the UCR classification datasets with only $40\%$ of the parameters of the second-best approach. Our source code for the CoInception framework is available at https://github.com/anhduy0911/CoInception.
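As a rough illustration of the encoder idea, the following PyTorch sketch combines parallel dilated convolutions inside an Inception-style block; the branch counts and layer sizes are our own assumptions, not the released CoInception code.

    import torch
    import torch.nn as nn

    class DilatedInceptionBlock(nn.Module):
        def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
            super().__init__()
            # Parallel branches with growing dilation widen the receptive
            # field without the parameter cost of larger kernels.
            self.branches = nn.ModuleList([
                nn.Conv1d(in_ch, out_ch, kernel_size=3, dilation=d, padding=d)
                for d in dilations
            ])
            self.project = nn.Conv1d(out_ch * len(dilations), out_ch, kernel_size=1)

        def forward(self, x):  # x: (batch, channels, time)
            y = torch.cat([b(x) for b in self.branches], dim=1)
            return self.project(torch.relu(y))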

An End-to-End Time Series Model for Simultaneous Imputation and Forecast

Jun 01, 2023
Trang H. Tran, Lam M. Nguyen, Kyongmin Yeo, Nam Nguyen, Dzung Phan, Roman Vaculin, Jayant Kalagnanam

Time series forecasting using historical data has been an interesting and challenging topic, especially when the data is corrupted by missing values. In many industrial problems, it is important to learn the inference function between the auxiliary observations and target variables, as it provides additional knowledge when the data is not fully observed. We develop an end-to-end time series model that aims to learn this inference relation and make a multiple-step-ahead forecast. Our framework jointly trains two neural networks, one to learn the feature-wise correlations and the other to model temporal behaviors. Our model is capable of simultaneously imputing the missing entries and making a multiple-step-ahead prediction. Experiments show good overall performance of our framework over existing methods in both imputation and forecasting tasks.
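The two-network design can be sketched as follows; the concrete modules (a linear cross-feature map and a GRU) are placeholders chosen for brevity, not the paper's architecture.

    import torch
    import torch.nn as nn

    class ImputeForecastNet(nn.Module):
        def __init__(self, n_features, hidden, horizon):
            super().__init__()
            self.feature_net = nn.Linear(n_features, n_features)  # feature-wise correlations
            self.temporal_net = nn.GRU(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_features * horizon)
            self.horizon = horizon

        def forward(self, x, mask):  # x: (batch, time, features); mask: 1 = observed
            # Impute missing entries from the observed features at each step.
            imputed = torch.where(mask.bool(), x, self.feature_net(x * mask))
            out, _ = self.temporal_net(imputed)
            forecast = self.head(out[:, -1]).view(-1, self.horizon, x.size(-1))
            return imputed, forecast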

Label-Free Concept Bottleneck Models

Apr 12, 2023
Tuomas Oikarinen, Subhro Das, Lam M. Nguyen, Tsui-Wei Weng

Concept bottleneck models (CBMs) are a popular way of creating more interpretable neural networks by having hidden-layer neurons correspond to human-understandable concepts. However, existing CBMs and their variants have two crucial limitations: first, they need labeled data for each of the predefined concepts, which is time-consuming and labor-intensive to collect; second, the accuracy of a CBM is often significantly lower than that of a standard neural network, especially on more complex datasets. This poor performance creates a barrier to adopting CBMs in practical real-world applications. Motivated by these challenges, we propose Label-free CBM, a novel framework to transform any neural network into an interpretable CBM without labeled concept data, while retaining high accuracy. Our Label-free CBM has many advantages. It is: scalable - we present the first CBM scaled to ImageNet; efficient - creating a CBM takes only a few hours even for very large datasets; and automated - training it for a new dataset requires minimal human effort. Our code is available at https://github.com/Trustworthy-ML-Lab/Label-free-CBM.
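One way to read the recipe is as two fitted maps: a projection from backbone features to concept scores (obtained without labels, e.g. from a pretrained vision-language model), and an interpretable linear head on the concept activations. The sketch below is a hedged paraphrase under that assumption; all names are placeholders, not the released code.

    import torch
    import torch.nn as nn

    def build_label_free_cbm(features, concept_scores, labels, n_classes, steps=100):
        # features: (N, d) backbone activations; concept_scores: (N, k) image-to-
        # concept similarities from a vision-language model; labels: (N,) class ids.
        proj = nn.Linear(features.size(1), concept_scores.size(1))  # concept bottleneck
        head = nn.Linear(concept_scores.size(1), n_classes)         # interpretable head
        opt = torch.optim.Adam(list(proj.parameters()) + list(head.parameters()), lr=1e-3)
        for _ in range(steps):
            opt.zero_grad()
            concepts = proj(features)
            loss = nn.functional.mse_loss(concepts, concept_scores) \
                 + nn.functional.cross_entropy(head(concepts), labels)
            loss.backward()
            opt.step()
        return proj, head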

* Published at ICLR 2023 

ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction

Feb 11, 2023
Wang Zhang, Tsui-Wei Weng, Subhro Das, Alexandre Megretski, Luca Daniel, Lam M. Nguyen

Deep neural networks (DNNs) have shown great capacity for modeling dynamical systems; nevertheless, they usually do not obey physics constraints such as conservation laws. This paper proposes a new learning framework, named ConCerNet, to improve the trustworthiness of DNN-based dynamics modeling by endowing it with invariant properties. ConCerNet consists of two steps: (i) a contrastive learning method to automatically capture the system invariants (i.e., conservation properties) along the trajectory observations; (ii) a neural projection layer to guarantee that the learned dynamics models preserve the learned invariants. We theoretically prove the functional relationship between the learned latent representation and the unknown system invariant function. Experiments show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics by a large margin. With neural network based parameterization and no dependence on prior knowledge, our method can be extended to complex and large-scale dynamics by leveraging an autoencoder.
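The projection step has a simple closed form: remove from the learned vector field its component along the gradient of the learned invariant $H$, so that $\dot{H} = \nabla H \cdot f = 0$ along trajectories. A minimal PyTorch sketch, under our own interface assumptions:

    import torch

    def project_dynamics(f_x, x, H):
        # f_x: learned dynamics f(x); H: scalar network for the learned invariant.
        x = x.requires_grad_(True)
        grad_H = torch.autograd.grad(H(x).sum(), x, create_graph=True)[0]
        # Subtract the component of f along grad(H) so grad(H) . f_proj = 0.
        coef = (grad_H * f_x).sum(-1, keepdim=True) / \
               (grad_H.pow(2).sum(-1, keepdim=True) + 1e-8)
        return f_x - coef * grad_H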

* 22 pages, 7 figures 

Generalizing DP-SGD with Shuffling and Batching Clipping

Dec 12, 2022
Marten van Dijk, Phuong Ha Nguyen, Toan N. Nguyen, Lam M. Nguyen

Classical differentially private DP-SGD implements individual clipping with random subsampling, which forces a mini-batch SGD approach. We provide a general differentially private algorithmic framework that goes beyond DP-SGD and allows any first-order optimizer (e.g., classical SGD and momentum-based SGD approaches) in combination with batch clipping, which clips an aggregate of computed gradients rather than summing clipped gradients (as is done in individual clipping). The framework also admits sampling techniques beyond random subsampling, such as shuffling. Our DP analysis follows the $f$-DP approach and introduces a new proof technique that also allows us to analyze group privacy. In particular, for $E$ epochs of work and groups of size $g$, we show a $\sqrt{g E}$ DP dependency for batch clipping with shuffling. This is much better than the previously anticipated linear dependency on $g$, and much better than the previously expected square-root dependency on the total number of rounds within $E$ epochs, which is generally much larger than $\sqrt{E}$.
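The shape of the framework can be summarized in a few lines: shuffle, partition into batches, clip one aggregated gradient per batch, add Gaussian noise, and hand the result to any first-order optimizer. The sketch below is illustrative; the interfaces are assumptions, not the paper's pseudocode.

    import numpy as np

    def dp_epoch(data, grad_fn, step_fn, C, sigma, batch_size, rng):
        idx = rng.permutation(len(data))  # shuffling rather than random subsampling
        for start in range(0, len(data), batch_size):
            batch = [data[i] for i in idx[start:start + batch_size]]
            g = np.mean([grad_fn(x) for x in batch], axis=0)  # aggregate first
            norm = np.linalg.norm(g)
            g = g * min(1.0, C / norm) if norm > 0 else g     # batch clipping
            step_fn(g + rng.normal(0.0, sigma * C, size=g.shape))  # any first-order step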

* 38 pages, 0 figure 

Finding Optimal Policy for Queueing Models: New Parameterization

Jun 21, 2022
Trang H. Tran, Lam M. Nguyen, Katya Scheinberg

Queueing systems appear in many important real-life applications, including communication networks, transportation, and manufacturing systems. The reinforcement learning (RL) framework is a suitable model for the queueing control problem, where the underlying dynamics are usually unknown and the agent receives little information from the environment to navigate. In this work, we investigate the optimization aspects of the queueing model as an RL environment and provide insight into learning the optimal policy efficiently. We propose a new parameterization of the policy using the intrinsic properties of queueing network systems. Experiments show good performance of our methods under various load conditions, from light to heavy traffic.
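For intuition only, a policy whose action preferences grow monotonically with the queue length is one example of baking a structural property of queueing systems into the parameterization; this toy sketch is our own illustration, not the paper's parameterization.

    import torch
    import torch.nn as nn

    class MonotoneQueuePolicy(nn.Module):
        def __init__(self, n_actions):
            super().__init__()
            self.w = nn.Parameter(torch.zeros(n_actions))
            self.b = nn.Parameter(torch.zeros(n_actions))

        def forward(self, queue_len):  # queue_len: (batch, 1)
            # softplus(w) >= 0 keeps each action's preference monotone in the
            # queue length, a structural property queueing systems suggest.
            logits = nn.functional.softplus(self.w) * queue_len + self.b
            return torch.distributions.Categorical(logits=logits)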

On the Convergence to a Global Solution of Shuffling-Type Gradient Algorithms

Jun 13, 2022
Lam M. Nguyen, Trang H. Tran

The stochastic gradient descent (SGD) algorithm is the method of choice in many machine learning tasks thanks to its scalability and efficiency in dealing with large-scale problems. In this paper, we focus on the shuffling version of SGD, which matches the mainstream practical heuristics. We show the convergence to a global solution of shuffling SGD for a class of non-convex functions under over-parameterized settings. Our analysis employs more relaxed non-convex assumptions than the previous literature, while maintaining the computational complexity that shuffling SGD achieves in the general convex setting.
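The scheme under analysis is the standard without-replacement loop: each epoch draws a fresh permutation and takes one step per example. A minimal sketch, assuming a per-example gradient oracle grad_fn(w, i):

    import numpy as np

    def shuffling_sgd(w, grad_fn, n, lr, epochs, rng):
        for _ in range(epochs):
            for i in rng.permutation(n):    # one pass over a shuffled index set
                w = w - lr * grad_fn(w, i)  # per-example gradient step
        return w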

* 19 pages, 1 figure 

On the Convergence of Gradient Extrapolation Methods for Unbalanced Optimal Transport

Feb 08, 2022
Quang Minh Nguyen, Hoang H. Nguyen, Yi Zhou, Lam M. Nguyen

We study the Unbalanced Optimal Transport (UOT) between two measures of possibly different masses with at most $n$ components, where the marginal constraints of standard Optimal Transport (OT) are relaxed via Kullback-Leibler divergence with regularization factor $\tau$. We propose a novel algorithm based on the Gradient Extrapolation Method (GEM-UOT) to find an $\varepsilon$-approximate solution to the UOT problem in $O\big( \kappa n^2 \log\big(\frac{\tau n}{\varepsilon}\big) \big)$, where $\kappa$ is the condition number, which depends only on the two input measures. Compared to the only known complexity ${O}\big(\tfrac{\tau n^2 \log(n)}{\varepsilon} \log\big(\tfrac{\log(n)}{{\varepsilon}}\big)\big)$ for solving the UOT problem via the Sinkhorn algorithm, ours is better in $\varepsilon$ and lifts Sinkhorn's linear dependence on $\tau$, which hindered its practicality for approximating standard OT via UOT. Our proof technique is based on a novel dual formulation of the squared $\ell_2$-norm regularized UOT objective, which is of independent interest and also leads to a new characterization of the approximation error between UOT and OT in terms of both the transportation plan and the transport distance. Building on this, we further present an algorithm, based on GEM-UOT with fine-tuned $\tau$ and a post-process projection step, to find an $\varepsilon$-approximate solution to the standard OT problem in $O\big( \kappa n^2 \log\big(\frac{ n}{\varepsilon}\big) \big)$, a new complexity in the OT literature. Extensive experiments on synthetic and real datasets validate our theory and demonstrate the favorable performance of our methods in practice.
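For reference, the relaxed objective being solved replaces the hard marginal constraints of OT with KL penalties of weight $\tau$. The snippet below only evaluates this objective (assuming strictly positive marginals); it is not the GEM-UOT solver.

    import numpy as np

    def kl(p, q):
        # Generalized KL divergence for unnormalized measures.
        return np.sum(p * np.log(p / q) - p + q)

    def uot_objective(X, cost, a, b, tau):
        # X: transport plan; cost: ground-cost matrix; a, b: input measures.
        return np.sum(X * cost) + tau * kl(X.sum(1), a) + tau * kl(X.sum(0), b)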

Evaluating Robustness of Cooperative MARL: A Model-based Approach

Feb 07, 2022
Nhan H. Pham, Lam M. Nguyen, Jie Chen, Hoang Thanh Lam, Subhro Das, Tsui-Wei Weng

In recent years, a proliferation of methods has been developed for cooperative multi-agent reinforcement learning (c-MARL). However, the robustness of c-MARL agents against adversarial attacks has rarely been explored. In this paper, we propose to evaluate the robustness of c-MARL agents via a model-based approach. Our proposed formulation can craft stronger adversarial state perturbations of the c-MARL agent(s) to lower total team rewards more than existing model-free approaches. In addition, we propose the first victim-agent selection strategy, which allows us to develop even stronger adversarial attacks. Numerical experiments on multi-agent MuJoCo benchmarks illustrate the advantage of our approach over other baselines; the proposed model-based attack consistently outperforms them in all tested environments.
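Schematically, the attack uses a learned model of the environment to search, by gradient descent, for a bounded perturbation of the victim's observation that minimizes the predicted team reward. Everything below (including model.predict_team_reward) is a hypothetical interface sketched for illustration, not the paper's implementation.

    import torch

    def craft_perturbation(model, policies, obs, victim, eps, steps=20, lr=0.01):
        # obs: list of per-agent observation tensors; victim: attacked agent's index.
        delta = torch.zeros_like(obs[victim], requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            perturbed = list(obs)
            perturbed[victim] = obs[victim] + delta.clamp(-eps, eps)
            actions = [pi(o) for pi, o in zip(policies, perturbed)]
            reward = model.predict_team_reward(perturbed, actions)
            opt.zero_grad()
            reward.backward()  # Adam minimizes, so this lowers predicted team reward
            opt.step()
        return delta.detach().clamp(-eps, eps)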
