Mohammad Hajiesmaili

Cooperative Multi-agent Bandits: Distributed Algorithms with Optimal Individual Regret and Constant Communication Costs

Aug 08, 2023
Lin Yang, Xuchuang Wang, Mohammad Hajiesmaili, Lijun Zhang, John C. S. Lui, Don Towsley

Cooperative multi-agent multi-armed bandits, in which a set of distributed agents cooperatively play the same multi-armed bandit game, have recently been studied extensively. The goal is to develop bandit algorithms that achieve optimal group and individual regret with low communication between agents. Prior work has tackled this problem using two paradigms: leader-follower and fully distributed algorithms. Algorithms in both paradigms achieve optimal group regret. Leader-follower algorithms achieve constant communication costs but fail to achieve optimal individual regret, while state-of-the-art fully distributed algorithms achieve optimal individual regret but fail to achieve constant communication costs. This paper presents a simple yet effective communication policy and integrates it into a learning algorithm for cooperative bandits. Our algorithm achieves the best of both paradigms: optimal individual regret and constant communication costs.
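
Below is a minimal Python sketch of the kind of cooperative UCB agent this line of work builds on: each agent keeps local statistics, merges reward summaries received from peers, and defers the decision of when to broadcast to a pluggable communication policy. The class and the policy interface are illustrative assumptions, not the paper's algorithm, whose contribution is precisely a policy that keeps the total number of messages constant.

    import math

    class CooperativeUCBAgent:
        # Hypothetical agent: local statistics plus statistics merged from peers.
        def __init__(self, n_arms, comm_policy):
            self.n_arms = n_arms
            self.comm_policy = comm_policy      # callable: agent -> bool (share now?)
            self.local_sum = [0.0] * n_arms     # rewards observed locally
            self.local_cnt = [0] * n_arms
            self.merged_sum = [0.0] * n_arms    # rewards reported by peers
            self.merged_cnt = [0] * n_arms
            self.t = 0

        def select_arm(self):
            self.t += 1
            for a in range(self.n_arms):        # play every arm once first
                if self.local_cnt[a] + self.merged_cnt[a] == 0:
                    return a
            def ucb(a):
                n = self.local_cnt[a] + self.merged_cnt[a]
                mean = (self.local_sum[a] + self.merged_sum[a]) / n
                return mean + math.sqrt(2.0 * math.log(self.t) / n)
            return max(range(self.n_arms), key=ucb)

        def update(self, arm, reward):
            self.local_sum[arm] += reward
            self.local_cnt[arm] += 1
            return self.comm_policy(self)       # True means: broadcast local stats now

        def receive(self, peer_sum, peer_cnt):
            for a in range(self.n_arms):
                self.merged_sum[a] += peer_sum[a]
                self.merged_cnt[a] += peer_cnt[a]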

Adversarial Attacks on Online Learning to Rank with Click Feedback

May 26, 2023
Jinhang Zuo, Zhiyao Zhang, Zhiyong Wang, Shuai Li, Mohammad Hajiesmaili, Adam Wierman

Online learning to rank (OLTR) is a sequential decision-making problem in which a learning agent selects an ordered list of items and receives feedback through user clicks. Although potential attacks against OLTR algorithms may cause serious losses in real-world applications, little is known about adversarial attacks on OLTR. This paper studies attack strategies against multiple variants of OLTR. Our first result is an attack strategy against the UCB algorithm on classical stochastic bandits with binary feedback, which addresses the key issues caused by bounded and discrete feedback that previous works cannot handle. Building on this result, we design attack algorithms against UCB-based OLTR algorithms in position-based and cascade models. Finally, we propose a general attack strategy against any algorithm under the general click model. Each attack algorithm manipulates the learning agent into choosing the target attack item $T-o(T)$ times, incurring a cumulative cost of $o(T)$. Experiments on synthetic and real data further validate the effectiveness of our proposed attack algorithms.
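
As a toy illustration of the binary-feedback constraint, here is a hedged Python sketch of a click-poisoning attack on a plain UCB learner: whenever a non-target arm is pulled, the attacker flips an observed click to a non-click, so corruptions stay within {0, 1}. The function name and the crude "always flip" rule are assumptions for illustration; the paper's attack strategies are more refined.

    import math
    import random

    def attacked_ucb_run(true_means, target_arm, horizon, seed=0):
        rng = random.Random(seed)
        k = len(true_means)
        cnt, s = [0] * k, [0.0] * k
        attack_cost = 0
        for t in range(1, horizon + 1):
            if t <= k:
                arm = t - 1                     # initialization: pull each arm once
            else:
                arm = max(range(k),
                          key=lambda a: s[a] / cnt[a]
                                        + math.sqrt(2 * math.log(t) / cnt[a]))
            click = 1 if rng.random() < true_means[arm] else 0
            if arm != target_arm and click == 1:
                click = 0                       # corrupt the binary feedback
                attack_cost += 1                # each flip costs one unit
            cnt[arm] += 1
            s[arm] += click
        return cnt[target_arm], attack_cost     # target pulls and total attack cost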

Time Fairness in Online Knapsack Problems

May 22, 2023
Adam Lechowicz, Rik Sengupta, Bo Sun, Shahin Kamali, Mohammad Hajiesmaili

The online knapsack problem is a classic problem in the field of online algorithms. Its canonical version asks how to pack items of different values and weights arriving online into a capacity-limited knapsack so as to maximize the total value of the admitted items. Although optimal competitive algorithms are known for this problem, they may be fundamentally unfair, i.e., individual items may be treated inequitably depending on when they arrive. Inspired by recent attention to fairness in online settings, we develop a natural and practically relevant notion of time fairness for the online knapsack problem, and show that the existing optimal algorithms perform poorly under this metric. We propose a parameterized deterministic algorithm whose parameter precisely captures the Pareto-optimal trade-off between fairness and competitiveness. We show that randomization is theoretically powerful enough to be simultaneously competitive and fair; however, trace-driven experiments show that it does not work well in practice. To further improve the trade-off between fairness and competitiveness, we develop a fair, robust (competitive), and consistent learning-augmented algorithm that yields substantial performance improvements in trace-driven experiments.

* 24 pages, 5 figures 
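
To make the threshold idea concrete, below is a small Python sketch of a density-threshold admission rule for online knapsack with value densities in $[L, U]$: an item is admitted only if its value density exceeds a threshold that grows with the fraction of capacity already used. The `gamma` knob that flattens the threshold toward a constant is a hypothetical illustration of the fairness/competitiveness tension, not the paper's parameterization.

    import math

    def make_threshold(L, U, gamma=0.0):
        # gamma = 0: exponential threshold in utilization; gamma = 1: flat threshold.
        def psi(z):                             # z = fraction of capacity already used
            exponential = (L / math.e) * (U * math.e / L) ** z
            flat = math.sqrt(L * U)
            return (1 - gamma) * exponential + gamma * flat
        return psi

    def online_knapsack(items, capacity, psi):
        used, value = 0.0, 0.0
        for v, w in items:                      # items arrive online as (value, weight)
            if used + w <= capacity and v / w >= psi(used / capacity):
                used += w
                value += v
        return value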

Contextual Combinatorial Bandits with Probabilistically Triggered Arms

Mar 30, 2023
Xutong Liu, Jinhang Zuo, Siwei Wang, John C. S. Lui, Mohammad Hajiesmaili, Adam Wierman, Wei Chen

We study contextual combinatorial bandits with probabilistically triggered arms (C$^2$MAB-T) under a variety of smoothness conditions that capture a wide range of applications, such as contextual cascading bandits and contextual influence maximization bandits. Under the triggering probability modulated (TPM) condition, we devise the C$^2$-UCB-T algorithm and propose a novel analysis that achieves an $\tilde{O}(d\sqrt{KT})$ regret bound, removing a potentially exponentially large factor $O(1/p_{\min})$, where $d$ is the dimension of contexts, $p_{\min}$ is the minimum positive probability that any arm can be triggered, and batch-size $K$ is the maximum number of arms that can be triggered per round. Under the variance modulated (VM) or triggering probability and variance modulated (TPVM) conditions, we propose a new variance-adaptive algorithm VAC$^2$-UCB and derive a regret bound $\tilde{O}(d\sqrt{T})$, which is independent of the batch-size $K$. As a valuable by-product, we find our analysis technique and variance-adaptive algorithm can be applied to the CMAB-T and C$^2$MAB settings, improving existing results there as well. We also include experiments that demonstrate the improved performance of our algorithms compared with benchmark algorithms on synthetic and real-world datasets.

* arXiv admin note: text overlap with arXiv:2208.14837 
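
For readers unfamiliar with the contextual machinery, here is a short Python/NumPy sketch of the LinUCB-style per-arm estimate such algorithms build on: a ridge-regression estimate of the shared parameter plus an exploration bonus, after which an application-specific oracle selects the super-arm. The class name and the stubbed-out oracle and triggering model are assumptions for illustration, not the C$^2$-UCB-T or VAC$^2$-UCB algorithms themselves.

    import numpy as np

    class LinUCBEstimator:
        def __init__(self, d, lam=1.0, beta=1.0):
            self.V = lam * np.eye(d)            # regularized Gram matrix
            self.b = np.zeros(d)
            self.beta = beta                    # confidence-radius scaling

        def ucb_scores(self, contexts):
            # contexts: (n_arms, d); returns optimistic score per base arm.
            theta = np.linalg.solve(self.V, self.b)
            V_inv = np.linalg.inv(self.V)
            bonus = self.beta * np.sqrt(np.einsum('ij,jk,ik->i', contexts, V_inv, contexts))
            return contexts @ theta + bonus     # an oracle then picks the super-arm

        def update(self, x, reward):
            # Update with one observed (triggered) base arm's context and reward.
            self.V += np.outer(x, x)
            self.b += reward * x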

No-regret Algorithms for Fair Resource Allocation

Mar 11, 2023
Abhishek Sinha, Ativ Joshi, Rajarshi Bhattacharjee, Cameron Musco, Mohammad Hajiesmaili

We consider a fair resource allocation problem in the no-regret setting against an unrestricted adversary. The objective is to allocate resources equitably among several agents in an online fashion so that the difference between the aggregate $\alpha$-fair utilities of the agents under an optimal static clairvoyant allocation and under the online policy grows sub-linearly with time. The problem is challenging due to the non-additive nature of the $\alpha$-fairness function. It was previously shown that no online policy can achieve sublinear standard regret for this problem. In this paper, we propose an efficient online resource allocation policy, called Online Proportional Fair (OPF), that achieves $c_\alpha$-approximate sublinear regret with the approximation factor $c_\alpha=(1-\alpha)^{-(1-\alpha)}\leq 1.445$ for $0\leq \alpha < 1$. The upper bound on the $c_\alpha$-regret for this problem exhibits a surprising phase transition phenomenon: the regret bound changes from a power law to a constant at the critical exponent $\alpha=\frac{1}{2}$. As a corollary, our result also resolves an open problem raised by Even-Dar et al. [2009] on designing an efficient no-regret policy for the online job scheduling problem in certain parameter regimes. The proof of our results introduces new algorithmic and analytical techniques, including greedy estimation of future gradients for non-additive global reward functions and bootstrapping adaptive regret bounds, which may be of independent interest.
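
Below is a minimal Python sketch of the $\alpha$-fair objective and a greedy, gradient-style allocation step: the next unit of resource goes to the agent whose marginal $\alpha$-fair utility (the derivative $x^{-\alpha}$ of the utility at its cumulative allocation) is largest. The function names are assumptions for illustration; the paper's OPF policy and its regret guarantees are more involved.

    import math

    def alpha_fair(x, alpha):
        # alpha = 1 gives proportional fairness (log utility).
        if alpha == 1.0:
            return math.log(x)
        return x ** (1.0 - alpha) / (1.0 - alpha)

    def greedy_allocation_step(cumulative, alpha, eps=1e-9):
        # Marginal utility of one more unit for agent i is x_i^{-alpha};
        # give the whole unit to the agent with the largest marginal utility.
        grads = [(c + eps) ** (-alpha) for c in cumulative]
        winner = max(range(len(cumulative)), key=lambda i: grads[i])
        cumulative[winner] += 1.0
        return winner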

On-Demand Communication for Asynchronous Multi-Agent Bandits

Feb 15, 2023
Yu-Zhen Janice Chen, Lin Yang, Xuchuang Wang, Xutong Liu, Mohammad Hajiesmaili, John C. S. Lui, Don Towsley

This paper studies a cooperative multi-agent multi-armed stochastic bandit problem where agents operate asynchronously -- agent pull times and rates are unknown, irregular, and heterogeneous -- and face the same instance of a K-armed bandit problem. Agents can share reward information to speed up the learning process at an additional communication cost. We propose ODC, an on-demand communication protocol that tailors the communication of each pair of agents to their empirical pull times. ODC is efficient when the pull times of agents are highly heterogeneous, and its communication complexity depends on the empirical pull times of agents. ODC is a generic protocol that can be integrated into most cooperative bandit algorithms without degrading their performance. We then incorporate ODC into natural extensions of the UCB and AAE algorithms and propose two communication-efficient cooperative algorithms. Our analysis shows that both algorithms are near-optimal in regret.

* Accepted by AISTATS 2023 
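
The Python sketch below illustrates the general on-demand idea of buffering observations per peer and flushing them only when they are worth a message relative to how active that peer has been. The class, the threshold, and the ratio-based trigger are all assumptions for illustration; the actual ODC protocol and its trigger condition are specified in the paper.

    from collections import defaultdict

    class OnDemandBuffer:
        def __init__(self, threshold=1.0):
            self.threshold = threshold
            self.buffer = defaultdict(list)     # peer_id -> list of (arm, reward)
            self.peer_pulls = defaultdict(int)  # empirical pull counts of peers

        def record(self, peer_ids, arm, reward):
            # Buffer a fresh local observation for every peer that may need it.
            for p in peer_ids:
                self.buffer[p].append((arm, reward))

        def note_peer_pull(self, peer_id):
            self.peer_pulls[peer_id] += 1

        def maybe_flush(self, peer_id, my_pulls):
            # Send only if the buffered updates are "worth" a message for this peer,
            # judged by buffered volume relative to the peer's empirical activity.
            buffered = len(self.buffer[peer_id])
            rate = (self.peer_pulls[peer_id] + 1) / (my_pulls + 1)
            if buffered * rate >= self.threshold:
                msg, self.buffer[peer_id] = self.buffer[peer_id], []
                return msg
            return None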

Pareto-Optimal Learning-Augmented Algorithms for Online k-Search Problems

Nov 12, 2022
Russell Lee, Bo Sun, John C. S. Lui, Mohammad Hajiesmaili

This paper leverages machine-learned predictions to design online algorithms for the k-max and k-min search problems. Our algorithms achieve performance competitive with the offline algorithm in hindsight when the predictions are accurate (i.e., consistency) and also provide worst-case guarantees when the predictions are arbitrarily wrong (i.e., robustness). Further, we show that our algorithms attain the Pareto-optimal trade-off between consistency and robustness: no other algorithm for k-max or k-min search can improve on the consistency for a given robustness. To demonstrate the performance of our algorithms, we evaluate them in experiments of buying and selling Bitcoin.
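
As a rough illustration, here is a Python sketch of a reservation-price rule for k-max search (selling k units at prices in $[low, high]$): the i-th unit is sold at the first price exceeding its threshold, and a hypothetical trust parameter `lam` pulls every threshold toward the predicted best price. The geometric spacing and the linear interpolation are illustrative assumptions; the paper's Pareto-optimal thresholds are derived analytically and differ from this.

    def k_max_search(prices, k, low, high, prediction, lam=0.5):
        # Reservation prices spaced geometrically in [low, high] (illustrative choice).
        thresholds = [low * (high / low) ** ((i + 1) / (k + 1)) for i in range(k)]
        # Trusting the prediction pulls every threshold toward the predicted maximum.
        thresholds = [(1 - lam) * t + lam * prediction for t in thresholds]
        revenue, sold = 0.0, 0
        for p in prices:                        # prices arrive one by one
            if sold < k and p >= thresholds[sold]:
                revenue += p                    # sell one unit at this price
                sold += 1
        revenue += (k - sold) * prices[-1]      # compulsory sale of leftovers at the end
        return revenue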

CU-Net: Efficient Point Cloud Color Upsampling Network

Sep 12, 2022
Lingdong Wang, Mohammad Hajiesmaili, Jacob Chakareski, Ramesh K. Sitaraman

Point cloud upsampling is necessary for Augmented Reality, Virtual Reality, and telepresence scenarios. Although geometry upsampling, which densifies point cloud coordinates, is well studied, the upsampling of colors has been largely overlooked. In this paper, we propose CU-Net, the first deep-learning point cloud color upsampling model. Leveraging a feature extractor based on sparse convolution and a color prediction module based on a neural implicit function, CU-Net achieves linear time and space complexity. Therefore, CU-Net is theoretically guaranteed to be more efficient than most existing methods, which have quadratic complexity. Experimental results demonstrate that CU-Net can colorize a photo-realistic point cloud with nearly a million points in real time, while achieving better visual quality than baselines. Moreover, CU-Net can adapt to an arbitrary upsampling ratio and unseen objects. Our source code will be released to the public soon.
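
For intuition about the neural-implicit color head described above, here is a short PyTorch sketch (an assumption about the general design, not CU-Net itself): given a feature vector produced by a backbone for a source point and the offset of a query point from that source point, a small MLP predicts the query point's RGB color.

    import torch
    import torch.nn as nn

    class ImplicitColorHead(nn.Module):
        def __init__(self, feat_dim=32, hidden=64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3), nn.Sigmoid(),   # RGB in [0, 1]
            )

        def forward(self, features, offsets):
            # features: (N, feat_dim) backbone features of source points
            # offsets:  (N, 3) query-point coordinates relative to source points
            return self.mlp(torch.cat([features, offsets], dim=-1))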

Distributed Bandits with Heterogeneous Agents

Jan 23, 2022
Lin Yang, Yu-zhen Janice Chen, Mohammad Hajiesmaili, John CS Lui, Don Towsley

This paper tackles a multi-agent bandit setting where $M$ agents cooperate to solve the same instance of a $K$-armed stochastic bandit problem. The agents are heterogeneous: each agent has limited access to a local subset of arms, and the agents are asynchronous with different gaps between decision-making rounds. The goal of each agent is to find its optimal local arm, and agents can cooperate by sharing their observations with others. While cooperation between agents improves learning performance, it comes with the additional complexity of communication between agents. For this heterogeneous multi-agent setting, we propose two learning algorithms, a UCB-based algorithm and an AAE-based algorithm. We prove that both algorithms achieve order-optimal regret, namely $O\left(\sum_{i:\tilde{\Delta}_i>0} \log T/\tilde{\Delta}_i\right)$, where $\tilde{\Delta}_i$ is the minimum suboptimality gap between the reward mean of arm $i$ and any local optimal arm. In addition, by carefully selecting the valuable information to share, the AAE-based algorithm achieves a low communication complexity of $O(\log T)$. Finally, numerical experiments verify the efficiency of both algorithms.
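
A tiny Python sketch of the bookkeeping this setting requires (an illustration, not the paper's algorithms): each agent may only pull arms from its local subset, but its UCB indices are computed from globally shared statistics, i.e., its own observations plus whatever peers have reported.

    import math

    def local_ucb_choice(local_arms, shared_sum, shared_cnt, t):
        # Pull any locally available arm that is still unexplored.
        for a in local_arms:
            if shared_cnt[a] == 0:
                return a
        # Otherwise pick the locally available arm with the largest UCB index,
        # computed from the pooled (local + peer-reported) statistics.
        return max(local_arms,
                   key=lambda a: shared_sum[a] / shared_cnt[a]
                                 + math.sqrt(2 * math.log(t) / shared_cnt[a]))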

Pareto-Optimal Learning-Augmented Algorithms for Online Conversion Problems

Sep 03, 2021
Bo Sun, Russell Lee, Mohammad Hajiesmaili, Adam Wierman, Danny H. K. Tsang

This paper leverages machine-learned predictions to design competitive algorithms for online conversion problems with the goal of improving the competitive ratio when predictions are accurate (i.e., consistency), while also guaranteeing a worst-case competitive ratio regardless of the prediction quality (i.e., robustness). We unify the algorithmic design of both integral and fractional conversion problems, also known as the 1-max-search and one-way trading problems, into a class of online threshold-based algorithms (OTA). By incorporating predictions into the design of OTA, we achieve the Pareto-optimal trade-off of consistency and robustness, i.e., no online algorithm can achieve a better consistency guarantee for a given robustness guarantee. We demonstrate the performance of OTA using numerical experiments on Bitcoin conversion.
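
The following Python sketch illustrates the threshold-based template for (fractional) one-way trading: the fraction of the asset traded so far determines a threshold, and at each arriving price the algorithm trades just enough to keep the price at or above that threshold. The particular threshold function `phi` below is a simple increasing curve chosen for illustration; the competitive-optimal and prediction-aware thresholds in this line of work have different, analytically derived shapes.

    import math

    def make_phi(L, U):
        # A simple increasing threshold from L (nothing traded) to U (fully traded).
        def phi(w):                             # w = fraction of asset already traded
            return L + (U - L) * (math.e ** w - 1) / (math.e - 1)
        return phi

    def one_way_trading(prices, L, U, phi=None, grid=1000):
        phi = phi or make_phi(L, U)
        traded, revenue = 0.0, 0.0
        step = 1.0 / grid
        for p in prices:
            # Trade in small increments while the current price beats the threshold.
            while traded < 1.0 and p >= phi(traded + step):
                traded += step
                revenue += p * step
        revenue += (1.0 - traded) * prices[-1]  # compulsory trade of the remainder
        return revenue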
