Dileep Kalathil

Department of Electrical and Computer Engineering, Texas A&M University

Transformers are Efficient In-Context Estimators for Wireless Communication

Nov 01, 2023
Vicram Rajagopalan, Vishnu Teja Kunde, Chandra Shekhara Kaushik Valmeekam, Krishna Narayanan, Srinivas Shakkottai, Dileep Kalathil, Jean-Francois Chamberland

Pre-trained transformers can perform in-context learning, where they adapt to a new task using only a small number of prompts without any explicit model optimization. Inspired by this attribute, we propose a novel approach, called in-context estimation, for the canonical communication problem of estimating transmitted symbols from received symbols. A communication channel is essentially a noisy function that maps transmitted symbols to received symbols, and this function can be represented by an unknown parameter whose statistics depend on an (also unknown) latent context. Conventional approaches ignore this hierarchical structure and simply attempt to use known transmissions, called pilots, to perform a least-squares estimate of the channel parameter, which is then used to estimate successive, unknown transmitted symbols. We make the basic connection that transformers show excellent contextual sequence completion with a few prompts, and so they should be able to implicitly determine the latent context from pilot symbols to perform end-to-end in-context estimation of transmitted symbols. Furthermore, the transformer should use information efficiently, i.e., it should utilize any pilots received to attain the best possible symbol estimates. Through extensive simulations, we show that in-context estimation not only significantly outperforms standard approaches, but also achieves the same performance as an estimator with perfect knowledge of the latent context within a few context examples. Thus, we make a strong case that transformers are efficient in-context estimators in the communication setting.
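
A minimal numerical sketch of this setting, assuming a toy scalar complex channel y = h·x + n with QPSK symbols (the paper considers a richer latent-context model): it implements the conventional pilot-based least-squares baseline described above and shows how pilot pairs plus a new observation would be packed into a prompt for a pre-trained transformer, whose weights are omitted here.

```python
# Minimal sketch (assumption: a toy scalar complex channel y = h*x + n with QPSK
# symbols; the paper's latent-context channel model is richer than this).
import numpy as np

rng = np.random.default_rng(0)
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def simulate(n_pilots=4, n_data=8, snr_db=10.0):
    h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)       # unknown channel parameter
    x = rng.choice(QPSK, size=n_pilots + n_data)
    noise_std = 10 ** (-snr_db / 20)
    n = noise_std * (rng.normal(size=x.size) + 1j * rng.normal(size=x.size)) / np.sqrt(2)
    return h, x, h * x + n

h, x, y = simulate()
x_p, y_p = x[:4], y[:4]          # known pilot pairs
y_d = y[4:]                      # received symbols whose transmissions are unknown

# Conventional baseline from the abstract: least-squares channel estimate from
# pilots, then per-symbol detection against the constellation.
h_ls = np.vdot(x_p, y_p) / np.vdot(x_p, x_p)
x_hat = QPSK[np.argmin(np.abs(y_d[:, None] / h_ls - QPSK[None, :]), axis=1)]

# In-context estimation instead feeds the pilot pairs and the new observation to
# a pre-trained transformer as one prompt; the model maps the prompt directly to
# a symbol estimate (the transformer itself is omitted in this sketch).
prompt = np.concatenate([np.stack([x_p.real, x_p.imag, y_p.real, y_p.imag], axis=1).ravel(),
                         [y_d[0].real, y_d[0].imag]])
print("LS baseline symbol error rate:", np.mean(x_hat != x[4:]))
```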

* 10 pages, 4 figures, 2 tables, preprint 

Bridging Distributionally Robust Learning and Offline RL: An Approach to Mitigate Distribution Shift and Partial Data Coverage

Oct 27, 2023
Kishan Panaganti, Zaiyan Xu, Dileep Kalathil, Mohammad Ghavamzadeh

The goal of an offline reinforcement learning (RL) algorithm is to learn optimal policies using historical (offline) data, without access to the environment for online exploration. One of the main challenges in offline RL is distribution shift, which refers to the difference between the state-action visitation distribution of the data-generating policy and that of the learning policy. Many recent works have used the idea of pessimism for developing offline RL algorithms and characterizing their sample complexity under a relatively weak assumption of single policy concentrability. Different from the offline RL literature, the area of distributionally robust learning (DRL) offers a principled framework that uses a minimax formulation to tackle model mismatch between training and testing environments. In this work, we aim to bridge these two areas by showing that the DRL approach can be used to tackle the distribution shift problem in offline RL. In particular, we propose two offline RL algorithms using the DRL framework, for the tabular and linear function approximation settings, and characterize their sample complexity under the single policy concentrability assumption. We also demonstrate the superior performance of our proposed algorithms through simulation experiments.
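
As a concrete illustration of how a distributionally robust backup can be used in the tabular offline setting, the sketch below runs robust value iteration against a total-variation ball around an empirical transition model. The uncertainty sets, pessimism terms, and analysis in the paper differ; this is only an assumed instance of the general idea, with illustrative names.

```python
# Minimal sketch of a robust (pessimistic) Bellman backup in the tabular setting,
# using a total-variation ball of radius rho around the empirical transition model.
import numpy as np

def worst_case_expectation(p, v, rho):
    """min_{q : 0.5*||q - p||_1 <= rho} q @ v, by moving mass to the lowest-value state."""
    q, budget, lo = p.copy(), rho, np.argmin(v)
    for s in np.argsort(v)[::-1]:            # take mass from high-value states first
        if s == lo or budget <= 0:
            continue
        move = min(q[s], budget)
        q[s] -= move
        q[lo] += move
        budget -= move
    return q @ v

def robust_value_iteration(P_hat, R, gamma=0.95, rho=0.1, iters=200):
    """P_hat: empirical transitions, shape (S, A, S); R: rewards, shape (S, A)."""
    S, A, _ = P_hat.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = np.array([[R[s, a] + gamma * worst_case_expectation(P_hat[s, a], V, rho)
                       for a in range(A)] for s in range(S)])
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)

# Toy example with 3 states and 2 actions.
rng = np.random.default_rng(1)
P_hat = rng.dirichlet(np.ones(3), size=(3, 2))
R = rng.uniform(size=(3, 2))
V, pi = robust_value_iteration(P_hat, R)
print(V, pi)
```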

* 33 pages, preprint 

Natural Actor-Critic for Robust Reinforcement Learning with Function Approximation

Jul 17, 2023
Ruida Zhou, Tao Liu, Min Cheng, Dileep Kalathil, P. R. Kumar, Chao Tian

We study robust reinforcement learning (RL) with the goal of determining a well-performing policy that is robust against model mismatch between the training simulator and the testing environment. Previous policy-based robust RL algorithms mainly focus on the tabular setting under uncertainty sets that facilitate robust policy evaluation, but are no longer tractable when the number of states scales up. To this end, we propose two novel uncertainty set formulations, one based on double sampling and the other on an integral probability metric. Both make large-scale robust RL tractable even when one only has access to a simulator. We propose a robust natural actor-critic (RNAC) approach that incorporates the new uncertainty sets and employs function approximation. We provide finite-time convergence guarantees for the proposed RNAC algorithm to the optimal robust policy within the function approximation error. Finally, we demonstrate the robust performance of the policy learned by our proposed RNAC approach in multiple MuJoCo environments and a real-world TurtleBot navigation task.
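
The sketch below isolates only the "natural" ingredient of an actor-critic update: a softmax policy with linear features whose gradient is preconditioned by the inverse Fisher information matrix. The robust critic and the paper's uncertainty-set machinery are replaced by a given advantage vector, so this is an illustrative skeleton rather than RNAC itself.

```python
# Minimal sketch of a natural policy gradient step for one state: precondition
# the vanilla policy gradient by the inverse Fisher matrix. A stand-in advantage
# vector replaces the (robust) critic here.
import numpy as np

def softmax_policy(theta, phi):
    """phi: per-action features, shape (A, d); returns action probabilities."""
    logits = phi @ theta
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

def natural_gradient_step(theta, phi, advantages, lr=0.1, damping=1e-3):
    pi = softmax_policy(theta, phi)
    score = phi - pi @ phi                           # grad log pi(a) = phi[a] - E_pi[phi]
    grad = (pi * advantages) @ score                 # vanilla policy gradient
    fisher = (score.T * pi) @ score                  # E_pi[score score^T]
    nat_grad = np.linalg.solve(fisher + damping * np.eye(len(theta)), grad)
    return theta + lr * nat_grad

rng = np.random.default_rng(0)
phi = rng.normal(size=(4, 3))        # 4 actions, 3 features
theta = np.zeros(3)
adv = rng.normal(size=4)             # stand-in for a robust critic's advantage estimates
theta = natural_gradient_step(theta, phi, adv)
print(theta)
```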

LLMZip: Lossless Text Compression using Large Language Models

Jun 26, 2023
Chandra Shekhara Kaushik Valmeekam, Krishna Narayanan, Dileep Kalathil, Jean-Francois Chamberland, Srinivas Shakkottai

We provide new estimates of an asymptotic upper bound on the entropy of English using the large language model LLaMA-7B as a predictor for the next token given a window of past tokens. This estimate is significantly smaller than currently available estimates in \cite{cover1978convergent}, \cite{lutati2023focus}. A natural byproduct is an algorithm for lossless compression of English text which combines the prediction from the large language model with a lossless compression scheme. Preliminary results from limited experiments suggest that our scheme outperforms state-of-the-art text compression schemes such as BSC, ZPAQ, and paq8h.
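
A minimal sketch of the entropy estimate: average the negative log2 probability the language model assigns to each next token. GPT-2 is used here as a stand-in for LLaMA-7B (an assumption, purely so the snippet runs without gated weights), and the compressor itself (turning the token stream into a bitstream, e.g., with arithmetic coding) is omitted.

```python
# Estimate bits/token and bits/character from a causal LM's next-token log-probs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # stand-in for LLaMA-7B
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

text = "The quick brown fox jumps over the lazy dog."
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits                         # (1, T, vocab)
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)  # predictions for tokens 2..T
next_ids = ids[:, 1:]
token_lp = log_probs.gather(-1, next_ids.unsqueeze(-1)).squeeze(-1)

ln2 = torch.log(torch.tensor(2.0))
bits_per_token = (-token_lp / ln2).mean()
bits_per_char = -token_lp.sum() / (ln2 * len(text))
print(f"{bits_per_token.item():.2f} bits/token, {bits_per_char.item():.2f} bits/char")
```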

* 7 pages, 4 figures, 4 tables, preprint, added results on using LLMs with arithmetic coding 

Federated Ensemble-Directed Offline Reinforcement Learning

May 04, 2023
Desik Rengarajan, Nitin Ragothaman, Dileep Kalathil, Srinivas Shakkottai

We consider the problem of federated offline reinforcement learning (RL), a scenario under which distributed learning agents must collaboratively learn a high-quality control policy using only small pre-collected datasets generated according to different unknown behavior policies. Naively combining a standard offline RL approach with a standard federated learning approach to solve this problem can lead to poorly performing policies. In response, we develop the Federated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA), which distills the collective wisdom of the clients using an ensemble learning approach. We develop the FEDORA codebase to utilize distributed compute resources on a federated learning platform. We show that FEDORA significantly outperforms other approaches, including offline RL over the combined data pool, in various complex continuous control environments and real-world datasets. Finally, we demonstrate the performance of FEDORA in the real world on a mobile robot.
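
The snippet below is a simplified, assumed illustration of ensemble-directed aggregation in one federated round: the server weights each client's policy parameters by an estimate of that client's policy quality, so better-performing clients contribute more. It is not the paper's exact FEDORA update; the `aggregate` helper and the score values are hypothetical.

```python
# Performance-weighted aggregation of client policies in one federated round
# (an illustrative stand-in for the ensemble-directed rule in the paper).
import numpy as np

def aggregate(client_params, client_scores, temperature=50.0):
    """client_params: list of parameter vectors; client_scores: e.g. estimated
    returns of each client's local policy on its own offline data."""
    scores = np.asarray(client_scores, dtype=float)
    w = np.exp((scores - scores.max()) / temperature)
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))

rng = np.random.default_rng(0)
clients = [rng.normal(size=8) for _ in range(5)]        # client policy parameters
scores = [120.0, 80.0, 150.0, 60.0, 100.0]              # hypothetical estimated returns
global_params = aggregate(clients, scores)
print(global_params)
```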

Improved Sample Complexity Bounds for Distributionally Robust Reinforcement Learning

Mar 05, 2023
Zaiyan Xu, Kishan Panaganti, Dileep Kalathil

We consider the problem of learning a control policy that is robust against the parameter mismatches between the training and testing environments. We formulate this as a distributionally robust reinforcement learning (DR-RL) problem where the objective is to learn the policy which maximizes the value function against the worst possible stochastic model of the environment in an uncertainty set. We focus on the tabular episodic learning setting where the algorithm has access to a generative model of the nominal (training) environment around which the uncertainty set is defined. We propose the Robust Phased Value Learning (RPVL) algorithm to solve this problem for the uncertainty sets specified by four different divergences: total variation, chi-square, Kullback-Leibler, and Wasserstein. We show that our algorithm achieves $\tilde{\mathcal{O}}(|\mathcal{S}||\mathcal{A}| H^{5})$ sample complexity, which is uniformly better than the existing results by a factor of $|\mathcal{S}|$, where $|\mathcal{S}|$ is the number of states, $|\mathcal{A}|$ is the number of actions, and $H$ is the horizon length. We also provide the first-ever sample complexity result for the Wasserstein uncertainty set. Finally, we demonstrate the performance of our algorithm using simulation experiments.
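
As an example of the per-state-action subproblem such a robust value-learning method must solve, the sketch below evaluates the worst-case expected value over a Kullback-Leibler uncertainty set using its standard convex dual; the total variation, chi-square, and Wasserstein cases admit analogous closed-form or one-dimensional dual formulations. The function name and numbers are illustrative.

```python
# Worst-case expectation over a KL ball around an empirical next-state
# distribution, via the standard dual form (a one-dimensional maximization).
import numpy as np
from scipy.optimize import minimize_scalar

def kl_worst_case(p, v, rho):
    """inf_{q: KL(q||p) <= rho} q @ v  =  sup_{lam>0} -lam*log(p @ exp(-v/lam)) - lam*rho."""
    def neg_dual(lam):
        return -(-lam * np.log(p @ np.exp(-v / lam)) - lam * rho)
    res = minimize_scalar(neg_dual, bounds=(1e-6, 1e3), method="bounded")
    return -res.fun

p = np.array([0.5, 0.3, 0.2])       # empirical next-state distribution
v = np.array([1.0, 0.0, 2.0])       # current value estimates
for rho in (0.0, 0.05, 0.2):
    print(rho, kl_worst_case(p, v, rho))
```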

* Appeared in The 26th (2023) International Conference on Artificial Intelligence and Statistics (AISTATS) 

Dynamic Regret Analysis of Safe Distributed Online Optimization for Convex and Non-convex Problems

Feb 23, 2023
Ting-Jui Chang, Sapana Chaudhary, Dileep Kalathil, Shahin Shahrampour

This paper addresses safe distributed online optimization over an unknown set of linear safety constraints. A network of agents aims at jointly minimizing a global, time-varying function, which is only partially observable to each individual agent. Therefore, agents must engage in local communications to generate a safe sequence of actions competitive with the best minimizer sequence in hindsight, and the gap between the two sequences is quantified via dynamic regret. We propose distributed safe online gradient descent (D-Safe-OGD) with an exploration phase, where all agents estimate the constraint parameters collaboratively to build estimated feasible sets, ensuring the action selection safety during the optimization phase. We prove that for convex functions, D-Safe-OGD achieves a dynamic regret bound of $O(T^{2/3} \sqrt{\log T} + T^{1/3}C_T^*)$, where $C_T^*$ denotes the path-length of the best minimizer sequence. We further prove a dynamic regret bound of $O(T^{2/3} \sqrt{\log T} + T^{2/3}C_T^*)$ for certain non-convex problems, which establishes the first dynamic regret bound for a safe distributed algorithm in the non-convex setting.
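
A minimal single-agent sketch of the two phases, assuming a single unknown linear constraint $a^\top x \le b$: an exploration phase that regresses the constraint from noisy measurements, followed by projected online gradient descent on the estimated, slightly tightened halfspace. The distributed communication and the paper's exact tightening rule are omitted; the loss sequence and constants are illustrative.

```python
# Exploration phase (estimate an unknown linear safety constraint), then
# projected online gradient descent on the estimated feasible halfspace.
import numpy as np

rng = np.random.default_rng(0)
d, b = 3, 1.0
a_true = rng.normal(size=d)

# Exploration: probe safe points near the origin and regress a_hat.
X = 0.1 * rng.normal(size=(50, d))
y = X @ a_true + 0.01 * rng.normal(size=50)
a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
margin = 0.05                                       # tightening to stay safe w.h.p.

def project(x):
    """Euclidean projection onto {x : a_hat^T x <= b - margin}."""
    slack = a_hat @ x - (b - margin)
    return x if slack <= 0 else x - (slack / (a_hat @ a_hat)) * a_hat

# Optimization phase: OGD against a time-varying loss f_t(x) = ||x - c_t||^2.
x, eta = np.zeros(d), 0.1
for t in range(1, 201):
    c_t = np.array([np.sin(0.05 * t), np.cos(0.05 * t), 0.5])
    grad = 2 * (x - c_t)
    x = project(x - eta * grad)
print("final action:", x, "safe:", a_true @ x <= b)
```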

Enhanced Meta Reinforcement Learning using Demonstrations in Sparse Reward Environments

Sep 26, 2022
Desik Rengarajan, Sapana Chaudhary, Jaewon Kim, Dileep Kalathil, Srinivas Shakkottai

Meta reinforcement learning (Meta-RL) is an approach wherein the experience gained from solving a variety of tasks is distilled into a meta-policy. The meta-policy, when adapted over only a small number of steps (or even a single step), is able to perform near-optimally on a new, related task. However, a major challenge to adopting this approach to solve real-world problems is that they are often associated with sparse reward functions that only indicate whether a task is completed partially or fully. We consider the situation where some data, possibly generated by a sub-optimal agent, is available for each task. We then develop a class of algorithms entitled Enhanced Meta-RL using Demonstrations (EMRLD) that exploit this information, even if it is sub-optimal, to obtain guidance during training. We show how EMRLD jointly utilizes RL and supervised learning over the offline data to generate a meta-policy that demonstrates monotone performance improvements. We also develop a warm-started variant called EMRLD-WS that is particularly efficient for sub-optimal demonstration data. Finally, we show that our EMRLD algorithms significantly outperform existing approaches in a variety of sparse reward environments, including that of a mobile robot.
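
A minimal sketch of the joint objective, assuming a discrete-action policy network: a policy-gradient surrogate on on-policy data plus a supervised behavior-cloning term on the (possibly sub-optimal) demonstration data, traded off by a coefficient. The meta-learning outer loop is omitted, and the function name `emrld_style_loss` is illustrative rather than the paper's API.

```python
# Joint RL + supervised (behavior-cloning) loss on on-policy and demonstration data.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))  # logits over 2 actions

def emrld_style_loss(obs_rl, act_rl, adv_rl, obs_demo, act_demo, lambda_bc=1.0):
    # RL term: policy-gradient surrogate  -E[ log pi(a|s) * advantage ]
    logp_rl = torch.log_softmax(policy(obs_rl), dim=-1)
    rl_loss = -(logp_rl.gather(1, act_rl[:, None]).squeeze(1) * adv_rl).mean()
    # Supervised term: behavior cloning on (possibly sub-optimal) demonstrations
    logp_demo = torch.log_softmax(policy(obs_demo), dim=-1)
    bc_loss = -logp_demo.gather(1, act_demo[:, None]).squeeze(1).mean()
    return rl_loss + lambda_bc * bc_loss

obs_rl, act_rl, adv_rl = torch.randn(32, 4), torch.randint(0, 2, (32,)), torch.randn(32)
obs_demo, act_demo = torch.randn(16, 4), torch.randint(0, 2, (16,))
loss = emrld_style_loss(obs_rl, act_rl, adv_rl, obs_demo, act_demo)
loss.backward()
```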

* Accepted to NeurIPS 2022; first two authors contributed equally 

Robust Reinforcement Learning Algorithm for Vision-based Ship Landing of UAVs

Sep 17, 2022
Vishnu Saj, Bochan Lee, Dileep Kalathil, Moble Benedict

This paper addresses the problem of developing an algorithm for autonomous ship landing of vertical take-off and landing (VTOL) capable unmanned aerial vehicles (UAVs), using only a monocular camera on the UAV for tracking and localization. Ship landing is a challenging task due to the small landing space, six-degrees-of-freedom ship deck motion, limited visual references for localization, and adversarial environmental conditions such as wind gusts. We first develop a computer vision algorithm which estimates the relative position of the UAV with respect to a horizon reference bar on the landing platform using the image stream from the monocular camera on the UAV. Our approach is motivated by the actual ship landing procedure followed by Navy helicopter pilots, who track the horizon reference bar as a visual cue. We then develop a robust reinforcement learning (RL) algorithm for controlling the UAV towards the landing platform even in the presence of adversarial environmental conditions such as wind gusts. We demonstrate the superior performance of our algorithm compared to a benchmark nonlinear PID control approach, both in simulation experiments using the Gazebo environment and in a real-world setting using a Parrot ANAFI quad-rotor and a sub-scale ship platform undergoing six-degrees-of-freedom deck motion.
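
A toy pinhole-camera calculation, with assumed calibration numbers, of the kind of relative-position cue a vision front end can derive from the detected horizon reference bar; the actual detection pipeline and the RL controller are omitted, and all values are illustrative.

```python
# Range and offset from the detected endpoints of a reference bar of known length,
# using a simple pinhole-camera model (illustrative numbers throughout).
import numpy as np

f_px = 900.0                   # focal length in pixels (assumed calibration)
cx, cy = 640.0, 360.0          # principal point for a 1280x720 image
bar_length_m = 1.2             # assumed physical length of the reference bar

# Detected bar endpoints in pixel coordinates (stand-in values).
p_left, p_right = np.array([590.0, 300.0]), np.array([710.0, 302.0])

pixel_len = np.linalg.norm(p_right - p_left)
range_m = f_px * bar_length_m / pixel_len          # similar triangles
center = (p_left + p_right) / 2.0
lateral_m = (center[0] - cx) / f_px * range_m      # offset right of the optical axis
vertical_m = (center[1] - cy) / f_px * range_m     # offset below the optical axis

print(f"range ~{range_m:.1f} m, lateral ~{lateral_m:.2f} m, vertical ~{vertical_m:.2f} m")
```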
