Thanh Nguyen-Tang

A Cosine Similarity-based Method for Out-of-Distribution Detection

Jun 23, 2023
Nguyen Ngoc-Hieu, Nguyen Hung-Quang, The-Anh Ta, Thanh Nguyen-Tang, Khoa D Doan, Hoang Thanh-Tung

The ability to detect out-of-distribution (OOD) data is a crucial aspect of practical machine learning applications. In this work, we show that the cosine similarity between the test feature and the typical in-distribution (ID) feature is a good indicator of OOD data. We propose Class Typical Matching (CTM), a post hoc OOD detection algorithm that uses a cosine similarity scoring function. Extensive experiments on multiple benchmarks show that CTM outperforms existing post hoc OOD detection methods.

* Accepted paper at ICML 2023 Workshop on Spurious Correlations, Invariance, and Stability. 10 pages (4 main + appendix) 
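
As a rough illustration of the scoring idea, here is a minimal sketch of a cosine-similarity OOD score: class-typical features are taken as per-class mean feature vectors (an assumption for illustration; CTM's exact construction may differ), and the score is the maximum cosine similarity between the test feature and any class-typical feature.

```python
import numpy as np

def class_typical_features(train_feats, train_labels, num_classes):
    """Per-class mean feature vectors as a stand-in for 'typical' ID features."""
    return np.stack([train_feats[train_labels == c].mean(axis=0)
                     for c in range(num_classes)])

def cosine_ood_score(test_feat, typical_feats, eps=1e-8):
    """Max cosine similarity between the test feature and any class-typical feature.
    Lower scores suggest the input is OOD."""
    t = test_feat / (np.linalg.norm(test_feat) + eps)
    c = typical_feats / (np.linalg.norm(typical_feats, axis=1, keepdims=True) + eps)
    return float(np.max(c @ t))
```

A test input whose maximum similarity to every class-typical feature is low would be flagged as OOD.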

VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation

Mar 04, 2023
Thanh Nguyen-Tang, Raman Arora

We propose a novel algorithm for offline reinforcement learning called Value Iteration with Perturbed Rewards (VIPeR), which amalgamates the pessimism principle with random perturbations of the value function. Most current offline RL algorithms explicitly construct statistical confidence regions to obtain pessimism via lower confidence bounds (LCB), which cannot easily scale to complex problems where a neural network is used to estimate the value functions. Instead, VIPeR obtains pessimism implicitly by perturbing the offline data multiple times with carefully designed i.i.d. Gaussian noise, learning an ensemble of estimated state-action value functions, and acting greedily with respect to the minimum of the ensemble. The estimated state-action values are obtained by fitting a parametric model (e.g., neural networks) to the perturbed datasets using gradient descent. As a result, VIPeR needs only $\mathcal{O}(1)$ time complexity for action selection, while LCB-based algorithms require at least $\Omega(K^2)$, where $K$ is the total number of trajectories in the offline data. We also propose a novel data-splitting technique that removes a factor involving the log of the covering number from our bound. We prove that VIPeR yields a provable uncertainty quantifier with overparameterized neural networks and enjoys a sub-optimality bound of $\tilde{\mathcal{O}}( { \kappa H^{5/2} \tilde{d} }/{\sqrt{K}})$, where $\tilde{d}$ is the effective dimension, $H$ is the horizon length, and $\kappa$ measures the distributional shift. We corroborate the statistical and computational efficiency of VIPeR with an empirical evaluation on a wide set of synthetic and real-world datasets. To the best of our knowledge, VIPeR is the first algorithm for offline RL that is provably efficient for general Markov decision processes (MDPs) with neural network function approximation.

* Notable top-25% at ICLR'23; code: https://github.com/thanhnguyentang/neural-offline-rl; v2: changed title
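
A minimal sketch of the reward-perturbation step described above, under illustrative assumptions (the noise scale `sigma`, ensemble size `M`, and the two-layer network are placeholders, not the paper's choices): each ensemble member is fit by gradient descent to targets built from an independently perturbed copy of the offline rewards.

```python
import torch
import torch.nn as nn

def perturbed_targets(rewards, next_values, gamma=0.99, sigma=0.1):
    """Regression targets with i.i.d. Gaussian reward perturbations (implicit pessimism)."""
    return rewards + sigma * torch.randn_like(rewards) + gamma * next_values

def fit_value_ensemble(states, actions, make_targets, M=10, epochs=50, lr=1e-3):
    """Fit M state-action value networks, each on a freshly perturbed copy of the data."""
    x = torch.cat([states, actions], dim=-1)
    ensemble = []
    for _ in range(M):
        net = nn.Sequential(nn.Linear(x.shape[-1], 64), nn.ReLU(), nn.Linear(64, 1))
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        y = make_targets()  # independent Gaussian perturbation per ensemble member
        for _ in range(epochs):
            loss = ((net(x).squeeze(-1) - y) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        ensemble.append(net)
    return ensemble
```

Here `make_targets` would be a callable such as `lambda: perturbed_targets(r, v_next)`, so each member sees its own noise draw.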

Provably Efficient Neural Offline Reinforcement Learning via Perturbed Rewards

Feb 24, 2023
Thanh Nguyen-Tang, Raman Arora

We propose a novel offline reinforcement learning (RL) algorithm, Value Iteration with Perturbed Rewards (VIPeR), which amalgamates the randomized value function idea with the pessimism principle. Most current offline RL algorithms explicitly construct statistical confidence regions to obtain pessimism via lower confidence bounds (LCB), which cannot easily scale to complex problems where a neural network is used to estimate the value functions. Instead, VIPeR obtains pessimism implicitly by perturbing the offline data multiple times with carefully designed i.i.d. Gaussian noise, learning an ensemble of estimated state-action values, and acting greedily with respect to the minimum of the ensemble. The estimated state-action values are obtained by fitting a parametric model (e.g., neural networks) to the perturbed datasets using gradient descent. As a result, VIPeR needs only $\mathcal{O}(1)$ time complexity for action selection, while LCB-based algorithms require at least $\Omega(K^2)$, where $K$ is the total number of trajectories in the offline data. We also propose a novel data-splitting technique that removes the potentially large log covering number from the learning bound. We prove that VIPeR yields a provable uncertainty quantifier with overparameterized neural networks and achieves an $\tilde{\mathcal{O}}\left( \frac{ \kappa H^{5/2} \tilde{d} }{\sqrt{K}} \right)$ sub-optimality, where $\tilde{d}$ is the effective dimension, $H$ is the horizon length, and $\kappa$ measures the distributional shift. We corroborate the statistical and computational efficiency of VIPeR with an empirical evaluation on a wide set of synthetic and real-world datasets. To the best of our knowledge, VIPeR is the first offline RL algorithm that is both provably efficient and computationally efficient for general Markov decision processes (MDPs) with neural network function approximation.

* Notable top-25% at ICLR'23; code: https://github.com/thanhnguyentang/neural-offline-rl
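
To complement the sketch under the previous entry, here is an illustrative version of the action-selection step: the pessimistic value of each candidate action is the minimum over the ensemble, and the agent acts greedily with respect to it. The per-decision cost depends only on the ensemble size and a forward pass, not on the number of offline trajectories $K$; the tensor shapes are assumptions for illustration.

```python
import torch

def select_action(state, candidate_actions, ensemble):
    """Greedy action w.r.t. the minimum of the value ensemble (implicit pessimism).

    state: (d_s,) tensor; candidate_actions: (A, d_a) tensor;
    ensemble: list of state-action value networks.
    """
    s = state.unsqueeze(0).expand(candidate_actions.shape[0], -1)
    x = torch.cat([s, candidate_actions], dim=-1)
    with torch.no_grad():
        values = torch.stack([net(x).squeeze(-1) for net in ensemble])  # (M, A)
    pessimistic_values = values.min(dim=0).values                       # (A,)
    return candidate_actions[pessimistic_values.argmax()]
```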

On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation

Nov 23, 2022
Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, Raman Arora

Sample-efficient offline reinforcement learning (RL) with linear function approximation has recently been studied extensively. Much of the prior work has yielded the minimax-optimal bound of $\tilde{\mathcal{O}}(\frac{1}{\sqrt{K}})$, with $K$ being the number of episodes in the offline data. In this work, we seek to understand instance-dependent bounds for offline RL with function approximation. We present an algorithm called Bootstrapped and Constrained Pessimistic Value Iteration (BCP-VI), which leverages data bootstrapping and constrained optimization on top of pessimism. We show that, under a partial data coverage assumption of \emph{concentrability} with respect to an optimal policy, the proposed algorithm yields a fast rate of $\tilde{\mathcal{O}}(\frac{1}{K})$ for offline RL when there is a positive gap in the optimal Q-value functions, even when the offline data were adaptively collected. Moreover, when the linear features of the optimal actions in the states reachable by an optimal policy span those reachable by the behavior policy, and the optimal actions are unique, offline RL achieves absolute zero sub-optimality error once $K$ exceeds a (finite) instance-dependent threshold. To the best of our knowledge, these are the first $\tilde{\mathcal{O}}(\frac{1}{K})$ bound and absolute-zero sub-optimality bound, respectively, for offline RL with linear function approximation from adaptive data with partial coverage. We also provide instance-agnostic and instance-dependent information-theoretic lower bounds to complement our upper bounds.

* AAAI'23 
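
For context only, below is a minimal numpy sketch of the standard LCB-style pessimistic Q-estimate with linear features that this line of work builds on; it is not BCP-VI itself (the bootstrapping and constrained-optimization components are omitted), and `beta` and `lam` are illustrative constants.

```python
import numpy as np

def pessimistic_q(phi_candidates, phi_data, targets, beta=1.0, lam=1.0):
    """LCB-style Q estimates with linear features.

    phi_candidates: (A, d) features of candidate state-action pairs
    phi_data:       (K, d) features of the logged transitions
    targets:        (K,)   regression targets, e.g. r + V_{h+1}(s')
    """
    d = phi_data.shape[1]
    Lambda = phi_data.T @ phi_data + lam * np.eye(d)      # regularized covariance
    w = np.linalg.solve(Lambda, phi_data.T @ targets)     # ridge-regression weights
    Lambda_inv = np.linalg.inv(Lambda)
    bonus = beta * np.sqrt(np.sum((phi_candidates @ Lambda_inv) * phi_candidates, axis=1))
    return phi_candidates @ w - bonus                     # point estimate minus LCB penalty
```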

Two-Stage Neural Contextual Bandits for Personalised News Recommendation

Jun 26, 2022
Mengyan Zhang, Thanh Nguyen-Tang, Fangzhao Wu, Zhenyu He, Xing Xie, Cheng Soon Ong

We consider the problem of personalised news recommendation where each user consumes news in a sequential fashion. Existing personalised news recommendation methods focus on exploiting user interests and ignore exploration in recommendation, which leads to biased feedback loops and hurts recommendation quality in the long term. We build on contextual bandit recommendation strategies, which naturally address the exploitation-exploration trade-off. The main challenges are the computational efficiency of exploring the large-scale item space and utilising deep representations with uncertainty. We propose a two-stage hierarchical topic-news deep contextual bandit framework to efficiently learn user preferences when there are many news items. We use deep learning representations for users and news, and generalise the neural upper confidence bound (UCB) policies to generalised additive UCB and bilinear UCB. Empirical results on a large-scale news recommendation dataset show that our proposed policies are efficient and outperform the baseline bandit policies.
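
A minimal sketch of the two-stage selection described above, with simple count-based UCB scores standing in for the neural generalised additive and bilinear UCBs (the class name, the constant `c`, and the `topic_to_items` mapping are illustrative assumptions): a topic is chosen by UCB first, then a news item within that topic.

```python
import math
from collections import defaultdict

class TwoStageUCB:
    """Hierarchical topic-then-item selection with count-based UCB scores."""

    def __init__(self, topic_to_items, c=1.0):
        self.topic_to_items = topic_to_items  # dict: topic -> list of item ids
        self.c = c
        self.counts = defaultdict(int)
        self.rewards = defaultdict(float)
        self.t = 0

    def _ucb(self, key):
        n = self.counts[key]
        if n == 0:
            return float('inf')  # force initial exploration
        return self.rewards[key] / n + self.c * math.sqrt(math.log(self.t + 1) / n)

    def recommend(self):
        self.t += 1
        topic = max(self.topic_to_items, key=lambda k: self._ucb(('topic', k)))
        item = max(self.topic_to_items[topic], key=lambda i: self._ucb(('item', i)))
        return topic, item

    def update(self, topic, item, reward):
        for key in (('topic', topic), ('item', item)):
            self.counts[key] += 1
            self.rewards[key] += reward
```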


On Practical Reinforcement Learning: Provable Robustness, Scalability, and Statistical Efficiency

Mar 03, 2022
Thanh Nguyen-Tang

This thesis rigorously studies fundamental reinforcement learning (RL) methods under modern practical considerations, including robust RL, distributional RL, and offline RL with neural function approximation. The thesis first prepares the reader with an overview of RL and the key technical background in statistics and optimization. For each setting, the thesis motivates the problems to be studied, reviews the current literature, provides computationally efficient algorithms with provable efficiency guarantees, and concludes with future research directions. The thesis makes fundamental contributions to the three settings above, algorithmically, theoretically, and empirically, while staying relevant to practical considerations.

* Ph.D. thesis, 209 pages 

Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization

Nov 27, 2021
Thanh Nguyen-Tang, Sunil Gupta, A. Tuan Nguyen, Svetha Venkatesh

Offline policy learning (OPL) leverages existing data collected a priori for policy optimization without any active exploration. Despite the prevalence of and recent interest in this problem, its theoretical and algorithmic foundations in function approximation settings remain underdeveloped. In this paper, we consider this problem along the axes of distributional shift, optimization, and generalization in offline contextual bandits with neural networks. In particular, we propose a provably efficient offline contextual bandit algorithm with neural network function approximation that does not require any functional assumption on the reward. We show that our method provably generalizes over unseen contexts under a milder condition for distributional shift than existing OPL works. Notably, unlike any other OPL method, our method learns from the offline data in an online manner using stochastic gradient descent, allowing us to bring the benefits of online learning into the offline setting. Moreover, we show that our method is more computationally efficient and has a better dependence on the effective dimension of the neural network than its online counterpart. Finally, we demonstrate the empirical effectiveness of our method on a range of synthetic and real-world OPL problems.

* A version is published in Offline Reinforcement Learning Workshop at NeurIPS'21 
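
One way to picture the combination of pessimism with streaming SGD described above is the following hedged sketch: the network is updated one logged sample at a time, and actions are scored by the predicted reward minus an uncertainty penalty built from the network's gradient features with a diagonal covariance approximation. The penalty form, the diagonal approximation, and all constants are illustrative assumptions rather than the paper's exact algorithm.

```python
import torch
import torch.nn as nn

class PessimisticNeuralBandit:
    """Single-pass SGD over logged (context-action, reward) pairs with a
    gradient-feature LCB used for pessimistic action scoring."""

    def __init__(self, input_dim, beta=0.1, lam=1.0, lr=1e-3):
        self.net = nn.Sequential(nn.Linear(input_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.opt = torch.optim.SGD(self.net.parameters(), lr=lr)
        n_params = sum(p.numel() for p in self.net.parameters())
        self.cov_diag = lam * torch.ones(n_params)  # diagonal covariance approximation
        self.beta = beta

    def _grad_features(self, x):
        self.net.zero_grad()
        self.net(x).backward()
        return torch.cat([p.grad.flatten() for p in self.net.parameters()]).detach()

    def update(self, x, reward):
        """One SGD step on a single logged sample (online-style learning from offline data)."""
        loss = (self.net(x).squeeze(-1) - reward) ** 2
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        g = self._grad_features(x)
        self.cov_diag += g * g

    def pessimistic_score(self, x):
        """Predicted reward minus an LCB-style uncertainty penalty."""
        g = self._grad_features(x)
        width = torch.sqrt((g * g / self.cov_diag).sum())
        with torch.no_grad():
            mean = self.net(x).squeeze(-1)
        return (mean - self.beta * width).item()
```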

Combining Online Learning and Offline Learning for Contextual Bandits with Deficient Support

Jul 24, 2021
Hung Tran-The, Sunil Gupta, Thanh Nguyen-Tang, Santu Rana, Svetha Venkatesh

We address policy learning with logged data in contextual bandits. Current offline policy learning algorithms are mostly based on inverse propensity score (IPS) weighting, which requires the logging policy to have \emph{full support}, i.e., a non-zero probability for any context/action of the evaluation policy. However, many real-world systems do not guarantee such logging policies, especially when the action space is large and many actions have poor or missing rewards. With such \emph{support deficiency}, offline learning fails to find an optimal policy. We propose a novel approach that uses a hybrid of offline learning and online exploration. Online exploration is used to explore actions unsupported in the logged data, while offline learning exploits the supported actions from the logged data, avoiding unnecessary exploration. Our approach determines an optimal policy with theoretical guarantees using a minimal number of online explorations. We demonstrate our algorithms' effectiveness empirically on a diverse collection of datasets.
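
As a small illustration of the support issue, here is a hedged sketch: the IPS off-policy value estimate is only well defined where the logging propensities are non-zero, so actions the logging policy (almost) never takes can be flagged as candidates for online exploration in a hybrid scheme like the one described above. The threshold `eps` and the data layout are illustrative assumptions.

```python
import numpy as np

def ips_value(rewards, logging_probs, target_probs):
    """Standard IPS estimate of the target policy's value on the supported data.

    rewards, logging_probs, target_probs: arrays of shape (n,), where the
    probabilities are those of the logged action under each policy.
    """
    weights = target_probs / logging_probs
    return float(np.mean(weights * rewards))

def unsupported_actions(logging_probs_per_action, eps=1e-6):
    """Actions the logging policy (almost) never plays; candidates for online exploration."""
    return [a for a, p in enumerate(logging_probs_per_action) if p < eps]
```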


On Finite-Sample Analysis of Offline Reinforcement Learning with Deep ReLU Networks

Mar 11, 2021
Thanh Nguyen-Tang, Sunil Gupta, Hung Tran-The, Svetha Venkatesh

This paper studies the statistical theory of offline reinforcement learning with deep ReLU networks. We consider the off-policy evaluation (OPE) problem, where the goal is to estimate the expected discounted reward of a target policy given logged data generated by unknown behaviour policies. We study a regression-based fitted Q evaluation (FQE) method using deep ReLU networks and characterize a finite-sample bound on the estimation error of this method under mild assumptions. Prior works on OPE with either general function approximation or deep ReLU networks ignore the data-dependent structure in the algorithm, dodging the technical bottleneck of OPE, while requiring rather restrictive regularity assumptions. In this work, we overcome these limitations and provide a comprehensive analysis of OPE with deep ReLU networks. In particular, we precisely quantify how the distribution shift of the offline data, the dimension of the input space, and the regularity of the system control the OPE estimation error. Consequently, we provide insights into the interplay between offline reinforcement learning and deep learning.

* 18 pages 
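
A minimal sketch of regression-based FQE with a small ReLU network, under illustrative assumptions (network width, optimizer, and iteration counts are placeholders): each iteration regresses $Q(s,a)$ onto the fixed target $r + \gamma\, Q_{\text{prev}}(s', \pi(s'))$ over the logged transitions.

```python
import torch
import torch.nn as nn

def fitted_q_evaluation(s, a, r, s_next, a_next_pi, gamma=0.99, iters=20, epochs=100, lr=1e-3):
    """FQE: s, a, r, s_next are logged transitions; a_next_pi are the target
    policy's actions at s_next. Returns the final Q network."""
    x = torch.cat([s, a], dim=-1)
    x_next = torch.cat([s_next, a_next_pi], dim=-1)
    q = nn.Sequential(nn.Linear(x.shape[-1], 64), nn.ReLU(), nn.Linear(64, 1))
    for _ in range(iters):
        with torch.no_grad():
            target = r + gamma * q(x_next).squeeze(-1)   # fixed regression target
        q_new = nn.Sequential(nn.Linear(x.shape[-1], 64), nn.ReLU(), nn.Linear(64, 1))
        opt = torch.optim.Adam(q_new.parameters(), lr=lr)
        for _ in range(epochs):
            loss = ((q_new(x).squeeze(-1) - target) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        q = q_new
    return q  # averaging Q(s_0, pi(s_0)) over initial states estimates the policy value
```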