Svetha Venkatesh

LaGR-SEQ: Language-Guided Reinforcement Learning with Sample-Efficient Querying

Aug 21, 2023
Thommen George Karimpanal, Laknath Buddhika Semage, Santu Rana, Hung Le, Truyen Tran, Sunil Gupta, Svetha Venkatesh

Large language models (LLMs) have recently demonstrated their impressive ability to provide context-aware responses via text. This ability could potentially be used to predict plausible solutions in sequential decision making tasks pertaining to pattern completion. For example, by observing a partial stack of cubes, LLMs can predict the correct sequence in which the remaining cubes should be stacked by extrapolating the observed patterns (e.g., cube sizes, colors or other attributes) in the partial stack. In this work, we introduce LaGR (Language-Guided Reinforcement learning), which uses this predictive ability of LLMs to propose solutions to tasks that have been partially completed by a primary reinforcement learning (RL) agent, in order to subsequently guide the latter's training. However, as RL training is generally not sample-efficient, deploying this approach would inherently imply that the LLM be repeatedly queried for solutions; a process that can be expensive and infeasible. To address this issue, we introduce SEQ (sample efficient querying), where we simultaneously train a secondary RL agent to decide when the LLM should be queried for solutions. Specifically, we use the quality of the solutions emanating from the LLM as the reward to train this agent. We show that our proposed framework LaGR-SEQ enables more efficient primary RL training, while simultaneously minimizing the number of queries to the LLM. We demonstrate our approach on a series of tasks and highlight the advantages of our approach, along with its limitations and potential future research directions.
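
To make the SEQ idea concrete, the sketch below is a minimal, hypothetical illustration (not the paper's implementation): a secondary agent with a binary query/skip action is trained with a simple tabular update, using the quality of the LLM's returned solution, minus a query cost, as its reward. The progress buckets, quality scorer and cost are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES = 10                      # hypothetical coarse buckets of primary-task progress
Q = np.zeros((N_STATES, 2))        # action 0 = skip, action 1 = query the LLM
alpha, eps, query_cost = 0.1, 0.1, 0.3

def llm_solution_quality(state):
    """Hypothetical stand-in: a score in [0, 1] for how useful the LLM's
    proposed pattern completion was when queried at this stage of the task."""
    return float(np.clip(rng.normal(state / N_STATES, 0.1), 0.0, 1.0))

def seq_step(state):
    """One decision of the secondary (SEQ) agent: query the LLM or not."""
    a = int(rng.integers(2)) if rng.random() < eps else int(Q[state].argmax())
    # Reward is the quality of the LLM's solution minus a fixed query cost;
    # skipping costs nothing but also provides no guidance to the primary agent.
    r = llm_solution_quality(state) - query_cost if a == 1 else 0.0
    Q[state, a] += alpha * (r - Q[state, a])      # contextual-bandit style update
    return a, r

for _ in range(2000):
    seq_step(int(rng.integers(N_STATES)))         # stage where the primary agent got stuck

print("query decision per progress bucket:", Q.argmax(axis=1))
```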

* 18 pages, 11 figures 

Intrinsic Motivation via Surprise Memory

Aug 09, 2023
Hung Le, Kien Do, Dung Nguyen, Svetha Venkatesh

We present a new computing model for intrinsic rewards in reinforcement learning that addresses the limitations of existing surprise-driven exploration methods. The reward is the novelty of the surprise rather than the surprise norm. We estimate the surprise novelty as the retrieval error of a memory network wherein the memory stores and reconstructs surprises. Our surprise memory (SM) augments the capability of surprise-based intrinsic motivators, maintaining the agent's interest in exciting exploration while reducing unwanted attraction to unpredictable or noisy observations. Our experiments demonstrate that the SM, combined with various surprise predictors, exhibits efficient exploration behaviors and significantly boosts the final performance in sparse reward environments, including Noisy-TV, navigation and challenging Atari games.
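
As an illustration of the retrieval-error idea only (the paper uses a trainable memory network, whereas the sketch below assumes a simple nearest-neighbour read), a minimal surprise memory producing intrinsic rewards might look like:

```python
import numpy as np

class SurpriseMemorySketch:
    """Illustrative only: store past surprise vectors, reconstruct the current one
    from its nearest stored neighbours, and return the reconstruction error as
    the intrinsic reward (the "surprise novelty")."""

    def __init__(self, dim, capacity=512, k=8):
        self.buf, self.n, self.k = np.zeros((capacity, dim)), 0, k

    def intrinsic_reward(self, surprise):
        stored = self.buf[:min(self.n, len(self.buf))]
        if len(stored) >= self.k:
            d = np.linalg.norm(stored - surprise, axis=1)
            recon = stored[np.argsort(d)[:self.k]].mean(axis=0)   # crude memory "read"
            r_int = float(np.linalg.norm(surprise - recon))
        else:
            r_int = 1.0                              # near-empty memory: everything is novel
        self.buf[self.n % len(self.buf)] = surprise  # write after reading
        self.n += 1
        return r_int

sm = SurpriseMemorySketch(dim=4)
rng = np.random.default_rng(1)
for t in range(100):
    surprise = rng.normal(size=4) * 0.1   # e.g. a prediction-error vector from any surprise predictor
    r = sm.intrinsic_reward(surprise)     # added to the task reward to drive exploration
```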

* Preprint 

Predictive Modeling through Hyper-Bayesian Optimization

Aug 01, 2023
Manisha Senadeera, Santu Rana, Sunil Gupta, Svetha Venkatesh

Model selection is an integral problem of model-based optimization techniques such as Bayesian optimization (BO). Current approaches often treat model selection as an estimation problem, to be periodically updated with observations coming from the optimization iterations. In this paper, we propose an alternative way to achieve both model selection and optimization efficiently. Specifically, we propose a novel way of integrating model selection and BO for the single goal of reaching the function optima faster. The algorithm moves back and forth between BO in the model space and BO in the function space, where the goodness of the recommended model is measured by a score function reflecting how well the model helped convergence in the function space, and is fed back to the model-space search. The score function is derived in such a way that it neutralizes the effect of the moving nature of the BO in the function space, thus keeping the model selection problem stationary. This back and forth leads to quick convergence for both model selection and BO in the function space. In addition to improved sample efficiency, the framework outputs information about the black-box function. Convergence is proved, and experimental results show significant improvement compared to standard BO.
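
The back-and-forth structure can be sketched as follows. This is a toy stand-in under several assumptions: the "model space" is just two candidate GP kernels, the model-space search is reduced to a greedy pick over scores, and the score is a simple smoothed improvement measure rather than the paper's derived score function.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern

rng = np.random.default_rng(0)
f = lambda x: -np.sin(3 * x) - x ** 2 + 0.7 * x          # black-box function to maximize
grid = np.linspace(0, 2, 200).reshape(-1, 1)             # candidate query points

models = {"rbf": RBF(length_scale=0.3),
          "matern": Matern(length_scale=0.3, nu=1.5)}    # toy "model space"
scores = {name: 0.0 for name in models}

X = rng.uniform(0, 2, size=(3, 1))
y = f(X).ravel()                                         # initial observations

for outer in range(6):
    # Stand-in for BO in the model space: pick the currently best-scoring model.
    name = "rbf" if outer == 0 else max(scores, key=scores.get)
    best_before = y.max()
    for _ in range(3):                                   # BO in the function space
        gp = GaussianProcessRegressor(kernel=models[name], normalize_y=True).fit(X, y)
        mu, sd = gp.predict(grid, return_std=True)
        x_next = grid[np.argmax(mu + 2.0 * sd)]          # UCB acquisition
        X, y = np.vstack([X, [x_next]]), np.append(y, f(x_next))
    # Score: how much this model helped convergence; fed back to the model-space search.
    scores[name] = 0.7 * scores[name] + 0.3 * (y.max() - best_before)

print("best x:", X[y.argmax()].item(), "best f:", y.max())
```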

Persistent-Transient Duality: A Multi-mechanism Approach for Modeling Human-Object Interaction

Jul 24, 2023
Hung Tran, Vuong Le, Svetha Venkatesh, Truyen Tran

Humans are highly adaptable, swiftly switching between different modes to progressively handle different tasks, situations and contexts. In human-object interaction (HOI) activities, these modes can be attributed to two mechanisms: (1) the large-scale consistent plan for the whole activity and (2) the small-scale child interactive actions that start and end along the timeline. While neuroscience and cognitive science have confirmed this multi-mechanism nature of human behavior, machine modeling approaches for human motion are trailing behind. Prior attempts to use gradually morphing structures (e.g., graph attention networks) to model the dynamic HOI patterns miss the expeditious and discrete mode-switching nature of human motion. To bridge that gap, this work proposes to model two concurrent mechanisms that jointly control human motion: the Persistent process that runs continually on the global scale, and the Transient sub-processes that operate intermittently on the local context of the human while interacting with objects. These two mechanisms form an interactive Persistent-Transient Duality that synergistically governs the activity sequences. We model this conceptual duality by a parent-child neural network of Persistent and Transient channels with a dedicated neural module for dynamic mechanism switching. The framework is trialed on HOI motion forecasting. On two rich datasets and a wide variety of settings, the model consistently delivers superior performance, proving its suitability for the challenge.
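
A minimal sketch of the parent-child, gated two-channel idea (the module names, sizes and sigmoid switch are assumptions made for illustration, not the paper's architecture):

```python
import torch
import torch.nn as nn

class PersistentTransientSketch(nn.Module):
    """Illustrative sketch only: a parent "Persistent" channel that runs at every
    step, a child "Transient" channel that is gated on/off, and a learned switch
    that blends the two for the prediction."""

    def __init__(self, in_dim=16, hid=32, out_dim=8):
        super().__init__()
        self.persistent = nn.GRUCell(in_dim, hid)   # global, always-on process
        self.transient = nn.GRUCell(in_dim, hid)    # local, interaction-scoped process
        self.switch = nn.Linear(hid + in_dim, 1)    # decides when the transient channel is active
        self.head = nn.Linear(hid, out_dim)

    def forward(self, x_seq):
        B, T, _ = x_seq.shape
        h_p = x_seq.new_zeros(B, self.persistent.hidden_size)
        h_t = torch.zeros_like(h_p)
        outs = []
        for t in range(T):
            x = x_seq[:, t]
            h_p = self.persistent(x, h_p)
            g = torch.sigmoid(self.switch(torch.cat([h_p, x], dim=-1)))  # mode switch in [0, 1]
            h_t = g * self.transient(x, h_t) + (1 - g) * h_t             # transient updates only when switched on
            outs.append(self.head(g * h_t + (1 - g) * h_p))              # blend channels for the output
        return torch.stack(outs, dim=1)

model = PersistentTransientSketch()
pred = model(torch.randn(2, 10, 16))   # (batch, time, features) -> (2, 10, 8)
```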

* Accepted at ICCV 2023 

BO-Muse: A human expert and AI teaming framework for accelerated experimental design

Mar 03, 2023
Sunil Gupta, Alistair Shilton, Arun Kumar A V, Shannon Ryan, Majid Abdolshah, Hung Le, Santu Rana, Julian Berk, Mahad Rashid, Svetha Venkatesh

In this paper we introduce BO-Muse, a new approach to human-AI teaming for the optimization of expensive black-box functions. Inspired by the intrinsic difficulty of extracting expert knowledge and distilling it back into AI models and by observations of human behaviour in real-world experimental design, our algorithm lets the human expert take the lead in the experimental process. The human expert can use their domain expertise to its full potential, while the AI plays the role of a muse, injecting novelty and searching for areas of weakness to break the human out of over-exploitation induced by cognitive entrenchment. With mild assumptions, we show that our algorithm converges sub-linearly, at a rate faster than the AI or human alone. We validate our algorithm using synthetic data and with human experts performing real-world experiments.
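
A toy sketch of the teaming loop, assuming a hypothetical expert heuristic and a purely novelty-seeking muse in place of the paper's BO machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: -(x - 1.3) ** 2 + np.sin(5 * x)            # expensive black-box (stand-in)
candidates = np.linspace(0, 2, 100)

def human_suggestion(history_x):
    # Hypothetical expert heuristic: refine around the best experiment seen so far.
    best = history_x[np.argmax(f(history_x))]
    return float(np.clip(best + rng.normal(0, 0.05), 0, 2))

def muse_suggestion(history_x):
    # Inject novelty: pick the candidate farthest from everything tried so far.
    dists = np.abs(candidates[:, None] - history_x[None, :]).min(axis=1)
    return float(candidates[np.argmax(dists)])

xs = np.array([rng.uniform(0, 2)])
for round_ in range(10):
    proposer = human_suggestion if round_ % 2 == 0 else muse_suggestion  # human leads, AI plays muse
    xs = np.append(xs, proposer(xs))

print("best experiment:", xs[np.argmax(f(xs))])
```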

* 34 Pages, 7 Figures and 5 Tables 

Zero-shot Sim2Real Adaptation Across Environments

Feb 08, 2023
Buddhika Laknath Semage, Thommen George Karimpanal, Santu Rana, Svetha Venkatesh

Simulation-based learning often provides a cost-efficient recourse to reinforcement learning applications in robotics. However, simulators are generally incapable of accurately replicating real-world dynamics, and thus bridging the sim2real gap is an important problem in simulation-based learning. Current solutions to bridge the sim2real gap involve hybrid simulators that are augmented with neural residual models. Unfortunately, they require a separate residual model for each individual environment configuration (i.e., a fixed setting of environment variables such as mass, friction, etc.), and thus are not quickly transferable to new environments. To address this issue, we propose a Reverse Action Transformation (RAT) policy, which learns to imitate simulated policies in the real world. Once learnt from a single environment, RAT can then be deployed on top of a Universal Policy Network to achieve zero-shot adaptation to new environments. We empirically evaluate our approach on a set of continuous control tasks and observe its advantage as a few-shot and zero-shot learner over competing baselines.
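
The core idea of transforming actions so that the real environment reproduces the simulator's predicted outcome can be sketched in one dimension; the linear dynamics and least-squares fit below are assumptions for illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
sim_step  = lambda x, a: x + a                    # simulator dynamics (known)
real_step = lambda x, a: x + 0.7 * a + 0.05       # real dynamics (unknown to the agent)

# Collect a few real interactions: which action produced which state change?
a_probe = rng.uniform(-1, 1, size=50)
delta_real = real_step(0.0, a_probe) - 0.0

# Fit the reverse map (desired state change -> real action) with least squares.
A = np.vstack([delta_real, np.ones_like(delta_real)]).T
w, b = np.linalg.lstsq(A, a_probe, rcond=None)[0]
rat = lambda desired_delta: w * desired_delta + b

# Deploy: take the simulated policy's action, compute the effect it expects,
# and execute the transformed action in the real world instead.
x, a_sim = 0.3, 0.4
desired = sim_step(x, a_sim) - x
x_next_real = real_step(x, rat(desired))
print(abs(x_next_real - sim_step(x, a_sim)))      # ~0: the real rollout matches the simulation
```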

Gradient Descent in Neural Networks as Sequential Learning in RKBS

Feb 01, 2023
Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh

The study of Neural Tangent Kernels (NTKs) has provided much-needed insight into the convergence and generalization properties of neural networks in the over-parametrized (wide) limit by approximating the network using a first-order Taylor expansion with respect to its weights in the neighborhood of their initialization values. This allows neural network training to be analyzed from the perspective of reproducing kernel Hilbert spaces (RKHS), which is informative in the over-parametrized regime but a poor approximation for narrower networks, as the weights change more during training. Our goal is to extend beyond the limits of NTK toward a more general theory. We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights as an inner product of two feature maps, respectively from data and weight-step space, to feature space, allowing neural network training to be analyzed from the perspective of reproducing kernel Banach space (RKBS). We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning in RKBS. Using this, we present a novel bound on uniform convergence in which the iteration count and learning rate play a central role, giving new theoretical insight into neural network training.
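
For context, the NTK view that this work extends rests on the first-order linearization below (standard NTK notation, not the paper's); the RKBS construction replaces the truncated expansion with an exact one written as an inner product of a data feature map and a weight-step feature map:

```latex
% Standard NTK linearization (generic notation, not the paper's): the network
% output is approximated to first order around the initial weights w_0.
\[
  f(x; w) \;\approx\; f(x; w_0) + \nabla_w f(x; w_0)^{\top} (w - w_0),
  \qquad
  K_{\mathrm{NTK}}(x, x') = \nabla_w f(x; w_0)^{\top} \nabla_w f(x'; w_0).
\]
% The RKBS construction instead keeps the full expansion in (w - w_0), writing
%   f(x; w) - f(x; w_0) = \langle \phi(x), \psi(w - w_0) \rangle
% for a data feature map \phi and a weight-step feature map \psi, the
% Banach-space analogue of the RKHS inner product above.
```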

Memory-Augmented Theory of Mind Network

Jan 17, 2023
Dung Nguyen, Phuoc Nguyen, Hung Le, Kien Do, Svetha Venkatesh, Truyen Tran

Social reasoning necessitates the capacity of theory of mind (ToM), the ability to contextualise and attribute mental states to others without having access to their internal cognitive structure. Recent machine learning approaches to ToM have demonstrated that we can train the observer to read the past and present behaviours of other agents and infer their beliefs (including false beliefs about things that no longer exist), goals, intentions and future actions. Challenges arise when the behavioural space is complex, demanding skilful navigation of rapidly changing contexts over an extended period. We tackle these challenges by equipping the observer with novel neural memory mechanisms to encode, and hierarchical attention to selectively retrieve, information about others. The memories allow rapid, selective querying of distal related past behaviours of others to deliberatively reason about their current mental state, beliefs and future behaviours. This results in ToMMY, a theory of mind model that learns to reason while making few assumptions about the underlying mental processes. We also construct a new suite of experiments to demonstrate that memories facilitate the learning process and achieve better theory of mind performance, especially for high-demand false-belief tasks that require inferring through multiple steps of changes.
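
A minimal sketch of the memory-read step (a single dot-product attention read over stored behaviour embeddings; the hierarchical attention and encoders of ToMMY are collapsed into hypothetical placeholders here):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
d = 16
keys   = rng.normal(size=(50, d))   # stored embeddings of the observed agent's past behaviour (placeholders)
values = rng.normal(size=(50, d))
query  = rng.normal(size=d)         # encoding of the current, rapidly changing context

attn = softmax(keys @ query / np.sqrt(d))        # which distal past episodes matter now?
read = attn @ values                             # retrieved evidence about the other's mind
belief_features = np.concatenate([query, read])  # would feed the mental-state / action predictor
```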

* Accepted for publication at AAAI 2023 

On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation

Nov 23, 2022
Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, Raman Arora

Sample-efficient offline reinforcement learning (RL) with linear function approximation has recently been studied extensively. Much of the prior work has yielded the minimax-optimal bound of $\tilde{\mathcal{O}}(\frac{1}{\sqrt{K}})$, with $K$ being the number of episodes in the offline data. In this work, we seek to understand instance-dependent bounds for offline RL with function approximation. We present an algorithm called Bootstrapped and Constrained Pessimistic Value Iteration (BCP-VI), which leverages data bootstrapping and constrained optimization on top of pessimism. We show that under a partial data coverage assumption, that of concentrability with respect to an optimal policy, the proposed algorithm yields a fast rate of $\tilde{\mathcal{O}}(\frac{1}{K})$ for offline RL when there is a positive gap in the optimal Q-value functions, even when the offline data were adaptively collected. Moreover, when the linear features of the optimal actions in the states reachable by an optimal policy span those reachable by the behavior policy and the optimal actions are unique, offline RL achieves absolute zero sub-optimality error when $K$ exceeds a (finite) instance-dependent threshold. To the best of our knowledge, these are the first $\tilde{\mathcal{O}}(\frac{1}{K})$ bound and absolute zero sub-optimality bound, respectively, for offline RL with linear function approximation from adaptive data with partial coverage. We also provide instance-agnostic and instance-dependent information-theoretic lower bounds to complement our upper bounds.
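
For intuition, a single step of generic pessimistic least-squares value iteration with linear features is sketched below; this is the standard pessimism template, not the BCP-VI algorithm, and the features, regression targets and penalty coefficient are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, beta, lam = 4, 200, 1.0, 1.0
phi = rng.normal(size=(n, d))            # features phi(s_k, a_k) of the offline transitions
target = rng.normal(size=n)              # regression targets r_k + V_hat(s'_k) (placeholders)

Lambda = phi.T @ phi + lam * np.eye(d)   # regularized design matrix of the offline data
theta = np.linalg.solve(Lambda, phi.T @ target)

def pessimistic_q(phi_sa):
    """Point estimate minus an uncertainty penalty: pessimism keeps the value
    low wherever the offline data give poor coverage of (s, a)."""
    bonus = beta * np.sqrt(phi_sa @ np.linalg.solve(Lambda, phi_sa))
    return phi_sa @ theta - bonus

print(pessimistic_q(rng.normal(size=d)))
```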

* AAAI'23 

Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation

Sep 21, 2022
Kien Do, Hung Le, Dung Nguyen, Dang Nguyen, Haripriya Harikumar, Truyen Tran, Santu Rana, Svetha Venkatesh

Data-free Knowledge Distillation (DFKD) has attracted attention recently thanks to its appealing capability of transferring knowledge from a teacher network to a student network without using training data. The main idea is to use a generator to synthesize data for training the student. As the generator gets updated, the distribution of the synthetic data changes. Such a distribution shift can be large if the generator and the student are trained adversarially, causing the student to forget the knowledge it acquired at previous steps. To alleviate this problem, we propose a simple yet effective method called Momentum Adversarial Distillation (MAD), which maintains an exponential moving average (EMA) copy of the generator and uses synthetic samples from both the generator and the EMA generator to train the student. Since the EMA generator can be considered an ensemble of the generator's old versions and often changes less between updates than the generator itself, training on its synthetic samples helps the student recall past knowledge and prevents it from adapting too quickly to new updates of the generator. Our experiments on six benchmark datasets, including large datasets such as ImageNet and Places365, demonstrate the superior performance of MAD over competing methods in handling the large distribution shift problem. Our method also compares favorably to existing DFKD methods and even achieves state-of-the-art results in some cases.
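
The EMA bookkeeping at the heart of this idea can be sketched as follows; the generator architecture, momentum value and training losses are placeholders, not the paper's setup:

```python
import copy
import torch

# Placeholder generator: maps a latent code to a flat "image".
generator = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                                torch.nn.Linear(128, 784))
ema_generator = copy.deepcopy(generator)
for p in ema_generator.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def update_ema(model, ema_model, momentum=0.999):
    """The EMA copy changes slowly, acting as an ensemble of the generator's past versions."""
    for p, p_ema in zip(model.parameters(), ema_model.parameters()):
        p_ema.mul_(momentum).add_(p, alpha=1.0 - momentum)

z = torch.randn(32, 64)
fresh_batch = generator(z)        # samples reflecting the generator's newest update
stable_batch = ema_generator(z)   # samples from its slowly moving average
# The student would be trained on both batches, and update_ema(generator, ema_generator)
# would be called after each generator step to keep the average in sync.
```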

* Accepted to NeurIPS 2022 