Karthik Abinav Sankararaman

BayesFormer: Transformer with Uncertainty Estimation

Jun 02, 2022
Karthik Abinav Sankararaman, Sinong Wang, Han Fang

The Transformer has become ubiquitous due to its dominant performance across a variety of NLP and image-processing tasks. However, there is little understanding of how to generate mathematically grounded uncertainty estimates for Transformer architectures. Models equipped with such uncertainty estimates can typically improve predictive performance, make networks more robust, avoid over-fitting, and serve as acquisition functions in active learning. In this paper, we introduce BayesFormer, a Transformer model whose dropouts are designed by Bayesian theory. We propose a new theoretical framework that extends approximate variational-inference-based dropout to Transformer-based architectures. Through extensive experiments, we validate the proposed architecture across four paradigms and show improvements across the board: language modeling and classification, long-sequence understanding, machine translation, and acquisition functions for active learning.
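
As a loose illustration of the dropout-as-approximate-Bayesian-inference idea that this abstract builds on, the sketch below runs Monte-Carlo dropout over a tiny Transformer encoder and reads the spread of the stochastic predictions as an uncertainty estimate. It is a generic illustration, not the BayesFormer design itself; the `TinyEncoder` module, sizes, and sample counts are hypothetical.

```python
# Minimal sketch: Monte-Carlo dropout for predictive uncertainty.
# Illustrates the general variational-dropout idea, not the specific
# BayesFormer architecture; module names and sizes are hypothetical.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self, vocab=1000, d_model=64, n_classes=2, p_drop=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, dropout=p_drop,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.drop = nn.Dropout(p_drop)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        h = self.encoder(self.embed(x)).mean(dim=1)   # mean-pool over tokens
        return self.head(self.drop(h))

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    """Keep dropout active at test time and average over stochastic passes."""
    model.train()  # enables dropout; in practice freeze any BatchNorm layers
    probs = torch.stack([model(x).softmax(-1) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)   # predictive mean and uncertainty

x = torch.randint(0, 1000, (8, 16))      # batch of 8 toy token sequences
mean, std = mc_dropout_predict(TinyEncoder(), x)
print(mean.shape, std.shape)             # (8, 2) each
```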

Stochastic Bandits for Multi-platform Budget Optimization in Online Advertising

Mar 25, 2021
Vashist Avadhanula, Riccardo Colini-Baldeschi, Stefano Leonardi, Karthik Abinav Sankararaman, Okke Schrijvers

We study the problem of an online advertising system that wants to optimally spend an advertiser's given budget for a campaign across multiple platforms, without knowing the value of showing an ad to the users on those platforms. We model this challenging practical application as a Stochastic Bandits with Knapsacks problem over $T$ rounds of bidding, with the set of arms given by the set of distinct bidding $m$-tuples, where $m$ is the number of platforms. We modify the algorithm proposed in Badanidiyuru \emph{et al.} to handle multiple platforms, obtaining algorithms for both discrete and continuous bid spaces. Namely, for discrete bid spaces we give an algorithm with regret $O\left(OPT \sqrt {\frac{mn}{B} }+ \sqrt{mn OPT}\right)$, where $OPT$ is the performance of the optimal algorithm that knows the distributions. For continuous bid spaces the regret of our algorithm is $\tilde{O}\left(m^{1/3} \cdot \min\left\{ B^{2/3}, (m T)^{2/3} \right\} \right)$. When restricted to this special case, this bound improves over Sankararaman and Slivkins in the regime $OPT \ll T$, as is the case in the particular application at hand. Second, we show an $\Omega\left(\sqrt{m OPT}\right)$ lower bound for the discrete case and an $\Omega\left(m^{1/3} B^{2/3}\right)$ lower bound for the continuous setting, almost matching the upper bounds. Finally, we use a real-world data set from a large internet online advertising company with multiple ad platforms and show that our algorithms outperform common benchmarks and satisfy the properties required by the real-world application.
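
The problem setup can be made concrete with a small simulation: treat every bid $m$-tuple as an arm, track empirical value and spend per arm, and stop when the budget is nearly exhausted. The optimistic value-per-cost index used below is a generic bandits-with-knapsacks heuristic, not the algorithm analyzed in the paper, and the environment and constants are hypothetical.

```python
# Sketch of multi-platform bidding as stochastic Bandits with Knapsacks.
# Generic UCB-style loop over discrete bid m-tuples; the reward/cost
# simulator and all constants are hypothetical.
import itertools
import numpy as np

rng = np.random.default_rng(0)
m, bids = 2, [0.5, 1.0, 2.0]                     # m platforms, discrete bid levels
arms = list(itertools.product(bids, repeat=m))   # all bid m-tuples
B, T = 200.0, 5000                               # total budget and horizon

def pull(arm):
    """Hypothetical environment: each platform yields value and spends budget."""
    value = sum(rng.binomial(1, min(0.9, 0.3 * b)) for b in arm)
    cost = sum(b * rng.binomial(1, 0.5) for b in arm)
    return value, cost

n = np.zeros(len(arms)); val = np.zeros(len(arms)); cost = np.zeros(len(arms))
budget, total_value = B, 0.0
for t in range(T):
    if budget <= max(sum(a) for a in arms):      # stop before overspending
        break
    if t < len(arms):
        i = t                                    # play each arm once
    else:
        ucb = val / n + np.sqrt(2 * np.log(t + 1) / n)
        lcb = np.maximum(cost / n - np.sqrt(2 * np.log(t + 1) / n), 1e-6)
        i = int(np.argmax(ucb / lcb))            # optimistic value-per-cost index
    v, c = pull(arms[i])
    n[i] += 1; val[i] += v; cost[i] += c
    budget -= c; total_value += v

print(f"value collected: {total_value:.0f}, budget left: {budget:.1f}")
```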

Beyond $\log^2(T)$ Regret for Decentralized Bandits in Matching Markets

Mar 12, 2021
Soumya Basu, Karthik Abinav Sankararaman, Abishek Sankararaman

We design decentralized algorithms for regret minimization in the two-sided matching market with one-sided bandit feedback that significantly improve upon the prior works (Liu et al. 2020a, 2020b, Sankararaman et al. 2020). First, for general markets and any $\varepsilon > 0$, we design an algorithm that achieves $O(\log^{1+\varepsilon}(T))$ regret to the agent-optimal stable matching with unknown time horizon $T$, improving upon the $O(\log^{2}(T))$ regret achieved in (Liu et al. 2020b). Second, we provide the optimal $\Theta(\log(T))$ agent-optimal regret for markets satisfying uniqueness consistency -- markets where leaving participants do not alter the original stable matching. Previously, $\Theta(\log(T))$ regret was achievable (Sankararaman et al. 2020, Liu et al. 2020b) only in the much more restricted serial dictatorship setting, where all arms have the same preference over the agents. We propose a phase-based algorithm in which, in each phase, besides deleting the globally communicated dominated arms, the agents locally delete arms with which they collide often. This local deletion is pivotal in breaking deadlocks arising from the rank heterogeneity of agents across arms. We further demonstrate the superiority of our algorithm over existing works through simulations.
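
The local-deletion idea can be sketched from a single agent's perspective: in each exponentially long phase, run UCB on the arms still considered active and drop any arm that blocked the agent for most of the phase. This is only a structural sketch of that one ingredient (it omits the globally communicated dominated-arm deletion), and the environment, thresholds, and helper names are hypothetical.

```python
# Structural sketch of local deletion in a phase-based matching-bandit scheme.
# Not the paper's exact algorithm; environment and thresholds are hypothetical.
import numpy as np

def run_agent(K, n_phases, propose, collision_frac=0.5):
    """One agent's view: UCB over a shrinking set of active arms."""
    active = set(range(K))
    mean, pulls, collisions = np.zeros(K), np.zeros(K), np.zeros(K)
    for i in range(1, n_phases + 1):
        length = 2 ** i                      # phases of exponentially growing length
        collisions[:] = 0
        for t in range(length):
            arms = list(active)
            radius = np.sqrt(2 * np.log(t + 2) / np.maximum(pulls[arms], 1))
            ucb = np.where(pulls[arms] == 0, np.inf, mean[arms] + radius)
            a = arms[int(np.argmax(ucb))]
            reward, collided = propose(a)    # environment: matched, or blocked by a
                                             # higher-ranked agent holding the arm
            if collided:
                collisions[a] += 1
            else:
                pulls[a] += 1
                mean[a] += (reward - mean[a]) / pulls[a]
        # local deletion: drop arms that blocked this agent for most of the phase
        to_drop = {a for a in active if collisions[a] > collision_frac * length}
        if to_drop != active:                # never delete the last remaining arm
            active -= to_drop
    return max(active, key=lambda a: mean[a])

# toy environment: arm 2 is always held by a higher-ranked agent (collision);
# the other arms pay Bernoulli rewards with slightly different means
best = run_agent(K=4, n_phases=8,
                 propose=lambda a: (0.0, True) if a == 2
                 else (float(np.random.rand() < 0.3 + 0.1 * a), False))
print("preferred arm after local deletions:", best)
```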

Robust Identifiability in Linear Structural Equation Models of Causal Inference

Jul 14, 2020
Karthik Abinav Sankararaman, Anand Louis, Navin Goyal

In this work, we consider the problem of robust parameter estimation from observational data in the context of linear structural equation models (LSEMs). LSEMs are a popular and well-studied class of models for inferring causality in the natural and social sciences. One of the main problems related to LSEMs is to recover the model parameters from the observational data. Under various conditions on LSEMs and the model parameters, prior work provides efficient algorithms to recover the parameters. However, these results are often about generic identifiability. In practice, generic identifiability is not sufficient and we need robust identifiability: small changes in the observational data should not affect the parameters by a huge amount. Robust identifiability has received far less attention and remains poorly understood. Sankararaman et al. (2019) recently provided a set of sufficient conditions on parameters under which robust identifiability is feasible. However, a limitation of their work is that their results only apply to a small sub-class of LSEMs, called ``bow-free paths.'' In this work, we significantly extend their work along multiple dimensions. First, for a large and well-studied class of LSEMs, namely ``bow-free'' models, we provide a sufficient condition on model parameters under which robust identifiability holds, thereby removing the restriction to paths required by prior work. We then show that this sufficient condition holds with high probability, implying that robust identifiability holds for a large set of parameters and that, for such parameters, existing algorithms already achieve robust identifiability. Finally, we validate our results on both simulated and real-world datasets.
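
A toy numerical example of the robust-identifiability question: recover an edge weight of a two-variable LSEM from the exact observational covariance and from a slightly perturbed one, and see how much the estimate moves. The model and perturbation below are hypothetical and only illustrate the quantity of interest, not the paper's sufficient condition.

```python
# Toy illustration of robust identifiability: recover an LSEM edge weight from
# the exact covariance and from a slightly perturbed covariance, then compare.
# The two-variable model and perturbation size are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
lam = 0.8                                    # true edge weight in X2 = lam*X1 + noise
Sigma = np.array([[1.0, lam],
                  [lam, lam**2 + 1.0]])      # exact observational covariance

def recover(S):
    return S[0, 1] / S[0, 0]                 # lam = Cov(X1, X2) / Var(X1)

eps = 1e-3
noise = rng.normal(size=(2, 2)); noise = (noise + noise.T) / 2
Sigma_hat = Sigma + eps * noise              # perturbed "observed" covariance

print("exact recovery:      ", recover(Sigma))
print("perturbed recovery:  ", recover(Sigma_hat))
print("amplification factor:", abs(recover(Sigma_hat) - recover(Sigma)) / eps)
```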

Dominate or Delete: Decentralized Competing Bandits with Uniform Valuation

Jun 26, 2020
Abishek Sankararaman, Soumya Basu, Karthik Abinav Sankararaman

We study regret minimization problems in a two-sided matching market where uniformly valued demand side agents (a.k.a. agents) continuously compete for getting matched with supply side agents (a.k.a. arms) with unknown and heterogeneous valuations. Such markets abstract online matching platforms (e.g., UpWork, TaskRabbit) and fall within the purview of the matching bandit models introduced in Liu et al. \cite{matching_bandits}. The uniform valuation on the demand side admits a unique stable matching equilibrium in the system. We design the first decentralized algorithm, \fullname (\name), for matching bandits under uniform valuation that does not require any knowledge of reward gaps or the time horizon, and thus partially resolves an open question in \cite{matching_bandits}. \name works in phases of exponentially increasing length. In each phase $i$, an agent first deletes dominated arms -- the arms preferred by agents ranked higher than itself. Deletion is followed by dynamic explore-exploit using the UCB algorithm on the remaining arms for $2^i$ rounds. Finally, the preferred arm is broadcast in a decentralized fashion to the other agents through \emph{pure exploitation} in $(N-1)K$ rounds, where $N$ is the number of agents and $K$ the number of arms. Comparing the obtained reward with respect to the unique stable matching, we show that \name achieves $O(\log(T)/\Delta^2)$ regret in $T$ rounds, where $\Delta$ is the minimum gap across all agents and arms. We also provide an (order-wise) matching regret lower bound.
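
The round accounting behind the phase structure is easy to sketch: each phase $i$ spends $2^i$ rounds on UCB explore-exploit plus $(N-1)K$ rounds of pure-exploitation broadcast, so only logarithmically many phases fit into a horizon $T$. The numbers below are hypothetical; this shows the bookkeeping only, not the full algorithm.

```python
# Sketch of the phase bookkeeping described above: 2**i explore-exploit rounds
# per phase plus (N-1)*K broadcast rounds. Numbers are hypothetical.
N, K, T = 5, 6, 100_000                       # agents, arms, horizon

rounds_used, phase = 0, 1
while rounds_used < T:
    explore = 2 ** phase                      # UCB on the remaining arms
    broadcast = (N - 1) * K                   # pure-exploitation signalling rounds
    rounds_used += explore + broadcast
    phase += 1

print(f"horizon {T} is covered by {phase - 1} phases")
print(f"broadcast overhead: {(phase - 1) * (N - 1) * K} rounds total")
```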

Advances in Bandits with Knapsacks

Feb 01, 2020
Karthik Abinav Sankararaman, Aleksandrs Slivkins

"Bandits with Knapsacks" (\BwK) is a general model for multi-armed bandits under supply/budget constraints. While worst-case regret bounds for \BwK are well-understood, we focus on logarithmic instance-dependent regret bounds. We largely resolve them for one limited resource other than time, and for known, deterministic resource consumption. We also bound regret within a given round ("simple regret"). One crucial technique analyzes the sum of the confidence terms of the chosen arms. This technique allows to import the insights from prior work on bandits without resources, which leads to several extensions.

Balancing the Tradeoff between Profit and Fairness in Rideshare Platforms During High-Demand Hours

Dec 18, 2019
Vedant Nanda, Pan Xu, Karthik Abinav Sankararaman, John P. Dickerson, Aravind Srinivasan

Rideshare platforms, when assigning requests to drivers, tend to maximize profit for the system and/or minimize waiting time for riders. Such platforms can exacerbate biases that drivers may have over certain types of requests. We consider the case of peak hours, when the demand for rides exceeds the supply of drivers. Drivers are well aware of their advantage during peak hours and can choose to be selective about which rides to accept. Moreover, if in such a scenario the assignment of requests to drivers (by the platform) is made only to maximize profit and/or minimize wait time for riders, requests of a certain type (e.g. from a non-popular pickup location, or to a non-popular drop-off location) might never be assigned to a driver. Such a system can be highly unfair to riders. However, increasing fairness might come at the cost of the overall profit made by the rideshare platform. To balance these conflicting goals, we present a flexible, non-adaptive algorithm, \lpalg, that allows the platform designer to control the profit and fairness of the system via parameters $\alpha$ and $\beta$ respectively. We model the matching problem as an online bipartite matching where the set of drivers is offline and requests arrive online. Upon the arrival of a request, we use \lpalg to assign it to a driver (the driver might then choose to accept or reject it) or reject the request. We formalize the measures of profit and fairness in our setting and show that, by using \lpalg, the competitive ratios for the profit and fairness measures are no worse than $\alpha/e$ and $\beta/e$ respectively. Extensive experimental results on both real-world and synthetic datasets confirm the validity of our theoretical lower bounds. Additionally, they show that \lpalg under some choice of $(\alpha, \beta)$ can beat two natural heuristics, Greedy and Uniform, on \emph{both} fairness and profit.

* 8 pages, 4 figures, Accepted at AAAI 2020 & AIES (Oral) 2020 
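
One way to picture the $(\alpha, \beta)$ knob described above is a non-adaptive rule that, for each arriving request, follows a profit-oriented proposal distribution with probability $\alpha$, a fairness-oriented one with probability $\beta$, and rejects otherwise. The sketch below is only that picture; it is not \lpalg itself, and the distributions and names are hypothetical.

```python
# Rough sketch of a non-adaptive assignment rule blending a profit-oriented and a
# fairness-oriented proposal distribution with weights alpha and beta. Only meant
# to illustrate the alpha/beta trade-off knob; not the paper's \lpalg, and all
# distributions here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def assign(request_type, drivers_free, profit_dist, fair_dist, alpha, beta):
    """Pick a driver (or reject) for one online request of a given type."""
    u = rng.random()
    if u < alpha:
        dist = profit_dist[request_type]       # e.g. from a profit-maximizing LP
    elif u < alpha + beta:
        dist = fair_dist[request_type]         # e.g. from a fairness-maximizing LP
    else:
        return None                            # reject with the remaining probability
    dist = np.where(drivers_free, dist, 0.0)   # only propose to free drivers
    return None if dist.sum() == 0 else int(rng.choice(len(dist), p=dist / dist.sum()))

# toy instance: 3 drivers, 2 request types, alpha + beta <= 1
profit_dist = {"popular": np.array([0.7, 0.2, 0.1]),
               "remote":  np.array([0.1, 0.1, 0.8])}
fair_dist   = {"popular": np.array([1 / 3] * 3), "remote": np.array([1 / 3] * 3)}
free = np.array([True, True, False])
print(assign("remote", free, profit_dist, fair_dist, alpha=0.5, beta=0.4))
```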

Mix and Match: Markov Chains & Mixing Times for Matching in Rideshare

Nov 30, 2019
Michael J. Curry, John P. Dickerson, Karthik Abinav Sankararaman, Aravind Srinivasan, Yuhao Wan, Pan Xu

Rideshare platforms such as Uber and Lyft dynamically dispatch drivers to match riders' requests. We model the dispatching process in rideshare as a Markov chain that takes into account the geographic mobility of both drivers and riders over time. Prior work explores dispatch policies in the limit of such Markov chains; we characterize when this limit assumption is valid, under a variety of natural dispatch policies. We give explicit bounds on convergence in general, and exact (including constants) convergence rates for special cases. Then, on simulated and real transit data, we show that our bounds characterize convergence rates -- even when the necessary theoretical assumptions are relaxed. Additionally, these policies compare well against a standard reinforcement learning algorithm which optimizes for profit without any convergence properties.
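
The convergence question can be illustrated directly: for a toy dispatch chain, track the total-variation distance between the $t$-step distribution and the stationary distribution. The 3-state transition matrix below is hypothetical; the abstract's convergence bounds concern this kind of quantity.

```python
# Small sketch of convergence to the limit: for a toy dispatch Markov chain,
# track the total-variation distance to the stationary distribution.
# The 3-state chain below is hypothetical.
import numpy as np

P = np.array([[0.6, 0.3, 0.1],     # toy transition matrix over 3 city zones
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

# stationary distribution: left eigenvector of P with eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = pi / pi.sum()

mu = np.array([1.0, 0.0, 0.0])     # all drivers start in zone 0
for t in range(1, 11):
    mu = mu @ P
    tv = 0.5 * np.abs(mu - pi).sum()
    print(f"t={t:2d}  TV distance to stationary = {tv:.4f}")
```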

Stability of Linear Structural Equation Models of Causal Inference

May 16, 2019
Karthik Abinav Sankararaman, Anand Louis, Navin Goyal

We consider the numerical stability of the parameter recovery problem in the Linear Structural Equation Model ($\LSEM$) of causal inference. A long line of work starting from Wright (1920) has focused on understanding which sub-classes of $\LSEM$ allow for efficient parameter recovery. Despite decades of study, this question is not yet fully resolved. The goal of this paper is complementary to this line of work; we want to understand the stability of the recovery problem in cases where efficient recovery is possible. Numerical stability of Pearl's notion of causality was first studied in Schulman and Srivastava (2016) using the concept of condition number, where they provide ill-conditioned examples. In this work, we provide a condition number analysis for the $\LSEM$. First, we prove that, under a sufficient condition, for a certain sub-class of $\LSEM$ that are \emph{bow-free} (Brito and Pearl (2002)), the parameter recovery is stable. We further prove that \emph{randomly} chosen input parameters for this family satisfy the condition with substantial probability. Hence for this family, on a large subset of the parameter space, recovery is numerically stable. Next, we construct an example of an $\LSEM$ on four vertices with \emph{unbounded} condition number. We then corroborate our theoretical findings via simulations as well as real-world experiments for a sociology application. Finally, we provide a general heuristic for estimating the condition number of any $\LSEM$ instance.

* To appear in UAI 2019 
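
For reference, the relative condition number of the recovery map $\Sigma \mapsto \theta(\Sigma)$ can be written in the standard (textbook) way; the paper's precise formulation may differ in norms and parametrization.

```latex
% Generic (textbook-style) relative condition number of the recovery map
% \Sigma \mapsto \theta(\Sigma); the paper's exact formulation may differ.
\[
  \kappa(\Sigma)
  \;=\;
  \lim_{\epsilon \to 0}\;
  \sup_{\|\delta\Sigma\| \le \epsilon \|\Sigma\|}
  \frac{\|\theta(\Sigma + \delta\Sigma) - \theta(\Sigma)\| / \|\theta(\Sigma)\|}
       {\|\delta\Sigma\| / \|\Sigma\|}.
\]
```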