
Chi-hua Wang


Rate-Optimal Contextual Online Matching Bandit

May 07, 2022
Yuantong Li, Chi-hua Wang, Guang Cheng, Will Wei Sun


Two-sided online matching platforms have been employed in various markets. However, agents' preferences in the current market are usually implicit and unknown, and must be learned from data. With the growing availability of side information involved in the decision process, modern online matching methodology demands the capability to track preference dynamics for agents based on their contextual information. This motivates us to consider a novel Contextual Online Matching Bandit prOblem (COMBO), which allows dynamic preferences in matching decisions. Existing works focus on multi-armed bandits with static preferences, but this is insufficient: two-sided preferences change as soon as one side's contextual information updates, resulting in non-static matching. In this paper, we propose a Centralized Contextual - Explore Then Commit (CC-ETC) algorithm to solve COMBO. CC-ETC solves online matching with dynamic preferences. In theory, we show that CC-ETC achieves a sublinear regret upper bound O(log(T)) and is a rate-optimal algorithm by proving a matching lower bound. In experiments, we demonstrate that CC-ETC is robust across preference schemes, context dimensions, reward noise levels, and context variation levels.

* 43 pages, 9 figures 
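The explore-then-commit idea behind CC-ETC can be illustrated with a minimal single-agent sketch: explore arms round-robin, fit a linear reward model per arm from contexts, then commit to the greedy choice. The paper's CC-ETC handles centralized two-sided matching; everything below (arm count, horizon, noise level, regularization) is a hypothetical toy instance, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: K arms, d-dimensional contexts, linear rewards.
K, d, T, T_explore = 4, 3, 2000, 400
theta = rng.normal(size=(K, d))  # unknown per-arm reward parameters

def reward(arm, x):
    return theta[arm] @ x + 0.1 * rng.normal()

# Explore phase: pull arms round-robin, record (context, reward) pairs.
X = [[] for _ in range(K)]
y = [[] for _ in range(K)]
for t in range(T_explore):
    x = rng.normal(size=d)
    a = t % K
    X[a].append(x)
    y[a].append(reward(a, x))

# Estimate each arm's parameter by ridge regression on its own data.
lam = 1.0
theta_hat = np.zeros((K, d))
for a in range(K):
    Xa, ya = np.asarray(X[a]), np.asarray(y[a])
    theta_hat[a] = np.linalg.solve(Xa.T @ Xa + lam * np.eye(d), Xa.T @ ya)

# Commit phase: greedily pick the arm with the highest estimated reward
# for each incoming context, with no further learning.
total = 0.0
for t in range(T - T_explore):
    x = rng.normal(size=d)
    a = int(np.argmax(theta_hat @ x))
    total += reward(a, x)
```

The exploration budget T_explore governs the usual trade-off: too short and the committed policy is wrong, too long and exploration regret dominates; tuning it drives the O(log(T)) rate in the paper.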

Online Forgetting Process for Linear Regression Models

Dec 03, 2020
Yuantong Li, Chi-hua Wang, Guang Cheng


Motivated by the EU's "Right To Be Forgotten" regulation, we initiate a study of statistical data deletion problems where users' data are accessible only for a limited period of time. This setting is formulated as an online supervised learning task with a \textit{constant memory limit}. We propose a deletion-aware algorithm, \texttt{FIFD-OLS}, for the low-dimensional case, and witness a catastrophic rank-swinging phenomenon due to the data deletion operation, which leads to statistical inefficiency. As a remedy, we propose the \texttt{FIFD-Adaptive Ridge} algorithm with a novel online regularization scheme that effectively offsets the uncertainty from deletion. In theory, we provide cumulative regret upper bounds for both online forgetting algorithms. In experiments, we show that \texttt{FIFD-Adaptive Ridge} outperforms ridge regression with a fixed regularization level, and we hope this work sheds light on online forgetting in more complex statistical models.
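The first-in-first-deleted setting described above can be sketched as online ridge regression over a sliding window: each new observation is added to the Gram matrix, and once the memory limit is hit the oldest observation is subtracted back out. A fixed regularization level lam stands in here for the paper's adaptive scheme, and all constants are hypothetical.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

d, m, lam = 3, 50, 1.0       # dimension, memory limit, ridge level (hypothetical)
theta = rng.normal(size=d)   # unknown regression parameter

window = deque()             # FIFO buffer of the m most recent (x, y) pairs
A = lam * np.eye(d)          # regularized Gram matrix of the current window
b = np.zeros(d)              # X^T y accumulated over the current window

for t in range(500):
    x = rng.normal(size=d)
    y = theta @ x + 0.1 * rng.normal()

    # Add the newest observation to the sufficient statistics ...
    window.append((x, y))
    A += np.outer(x, x)
    b += y * x

    # ... and delete the oldest one once the memory limit is exceeded (FIFO),
    # so only the last m observations ever influence the estimate.
    if len(window) > m:
        x_old, y_old = window.popleft()
        A -= np.outer(x_old, x_old)
        b -= y_old * x_old

    theta_hat = np.linalg.solve(A, b)
```

Each deletion removes a rank-one term from A, which is exactly what can make the unregularized (lam = 0) Gram matrix ill-conditioned; the ridge term lam * I keeps the solve stable, and the paper's contribution is choosing that level adaptively online.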
