We study the problem of repeated play in a zero-sum game in which the payoff matrix may change, in a possibly adversarial fashion, on each round; we call these Online Matrix Games. Finding the Nash Equilibrium (NE) of a two-player zero-sum game is core to many problems in statistics, optimization, and economics, and for a fixed game matrix this can be easily reduced to solving a linear program. When the payoff matrix evolves over time, however, our goal is to find a sequential algorithm that can compete, in a certain sense, with the NE of the long-term-averaged payoff matrix. We design an algorithm with small NE regret; that is, we ensure that the long-term payoff of both players is close to the minimax optimum in hindsight. Our algorithm achieves near-optimal dependence on the number of rounds and depends only poly-logarithmically on the number of actions available to the players. Additionally, we show that the naive reduction, in which each player simply minimizes its own regret, fails to achieve this objective regardless of which algorithms the players use. We also consider the so-called bandit setting, where the feedback is significantly limited, and we provide an algorithm with small NE regret using one-point estimates of each payoff matrix.
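For concreteness, one natural formalization of NE regret (our illustrative notation, not necessarily the paper's: $A_t$ denotes the round-$t$ payoff matrix and $x_t\in\Delta_n$, $y_t\in\Delta_m$ the players' mixed strategies on round $t$) is
\[
\mathrm{NE\text{-}Regret}(T) \;=\; \left|\, \sum_{t=1}^{T} x_t^\top A_t\, y_t \;-\; \min_{x\in\Delta_n}\,\max_{y\in\Delta_m}\; x^\top \Big(\sum_{t=1}^{T} A_t\Big)\, y \,\right|,
\]
so that driving this quantity to $o(T)$ keeps both players' cumulative payoff close to the minimax value of the time-averaged matrix.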
We consider Markov Decision Processes (MDPs) in which the rewards are unknown and may change in an adversarial manner. We provide an algorithm that achieves a state-of-the-art regret bound of $O(\sqrt{\tau(\ln|S|+\ln|A|)T}\,\ln(T))$, where $S$ is the state space, $A$ is the action space, $\tau$ is the mixing time of the MDP, and $T$ is the number of periods. The algorithm's computational complexity is polynomial in $|S|$ and $|A|$ per period. We then consider a setting often encountered in practice, where the state space of the MDP is too large to allow for exact solutions. By approximating the state-action occupancy measures with a linear architecture of dimension $d\ll|S|$, we propose a modified algorithm with computational complexity polynomial in $d$. We also prove a regret bound for this modified algorithm; to the best of our knowledge, this is the first $\tilde{O}(\sqrt{T})$ regret bound for large-scale MDPs with changing rewards.
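As a sketch of the approximation step (the feature maps $\phi_i$ below are an assumption for illustration, not the paper's specific construction), the state-action occupancy measure is restricted to a $d$-dimensional linear family
\[
\mu_\theta(s,a) \;=\; \sum_{i=1}^{d} \theta_i\, \phi_i(s,a), \qquad \theta\in\mathbb{R}^d,\quad d\ll|S|,
\]
so the online updates optimize over $\theta\in\mathbb{R}^d$ rather than over all $|S|\,|A|$ occupancy values, which is what makes the per-period computation polynomial in $d$.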
Motivated by applications in clinical trials and finance, we study the problem of online convex optimization (with bandit feedback) in which the decision maker is risk-averse. We provide two algorithms to solve this problem. The first is a descent-type algorithm that is easy to implement. The second, which combines the ellipsoid method with a center-point device, achieves (almost) optimal regret bounds with respect to the number of rounds. To the best of our knowledge, this is the first attempt to address risk-aversion in the online convex bandit problem.
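To illustrate one common way risk-aversion is formalized in this setting (an expository example; the paper's precise risk measure may differ), the learner can be scored by a mean-variance criterion, competing with
\[
\min_{x\in\mathcal{K}} \Big\{ \mathbb{E}\big[\ell(x)\big] \;+\; \lambda\, \mathrm{Var}\big[\ell(x)\big] \Big\},
\]
where $\mathcal{K}$ is the decision set, $\ell$ the noisy loss observed through bandit feedback, and $\lambda\ge 0$ tunes the degree of risk-aversion.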
In this paper we develop the first algorithms for online submodular minimization that preserve differential privacy under full-information feedback and bandit feedback. A sequence of $T$ submodular functions over a collection of $n$ elements arrives online, and at each timestep the algorithm must choose a subset of $[n]$ before seeing the function. The algorithm incurs a cost equal to the function evaluated on the chosen set, and seeks to choose a sequence of sets that achieves low expected regret. Our first result is in the full-information setting, where the algorithm can observe the entire function after making its decision at each timestep. We give an algorithm in this setting that is $\epsilon$-differentially private and achieves expected regret $\tilde{O}\left(\frac{n^{3/2}\sqrt{T}}{\epsilon}\right)$. This algorithm works by relaxing each submodular function to a convex function using the Lovász extension, and then simulating an algorithm for differentially private online convex optimization. Our second result is in the bandit setting, where the algorithm can only observe the cost incurred by its chosen set and does not have access to the entire function. This setting is significantly more challenging because the algorithm does not receive enough information to compute the Lovász extension or its subgradients. Instead, we construct an unbiased estimate using a single-point estimator, and then simulate private online convex optimization using this estimate. Our algorithm for the bandit setting is $\epsilon$-differentially private and achieves expected regret $\tilde{O}\left(\frac{n^{3/2}T^{3/4}}{\epsilon}\right)$.
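For background, the Lovász extension used above admits the standard sorting form (a known identity, not specific to this paper): for a set function $f:2^{[n]}\to\mathbb{R}$ with $f(\emptyset)=0$ and a point $x\in[0,1]^n$, take a permutation $\pi$ with $x_{\pi(1)}\ge\cdots\ge x_{\pi(n)}$ and set
\[
\hat f(x) \;=\; \sum_{i=1}^{n} x_{\pi(i)} \Big( f\big(\{\pi(1),\dots,\pi(i)\}\big) - f\big(\{\pi(1),\dots,\pi(i-1)\}\big) \Big),
\]
which agrees with $f$ on indicator vectors and is convex precisely when $f$ is submodular. In the bandit setting the algorithm cannot evaluate this sum, which is why a single-point estimate of $\hat f$, and hence of its subgradients, is substituted instead.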