Qi Lei

Cluster-aware Semi-supervised Learning: Relational Knowledge Distillation Provably Learns Clustering

Jul 20, 2023
Yijun Dong, Kevin Miller, Qi Lei, Rachel Ward

Despite the empirical success and practical significance of (relational) knowledge distillation that matches (the relations of) features between teacher and student models, the corresponding theoretical interpretations remain limited for various knowledge distillation paradigms. In this work, we take an initial step toward a theoretical understanding of relational knowledge distillation (RKD), with a focus on semi-supervised classification problems. We start by casting RKD as spectral clustering on a population-induced graph unveiled by a teacher model. Via a notion of clustering error that quantifies the discrepancy between the predicted and ground truth clusterings, we illustrate that RKD over the population provably leads to low clustering error. Moreover, we provide a sample complexity bound for RKD with limited unlabeled samples. For semi-supervised learning, we further demonstrate the label efficiency of RKD through a general framework of cluster-aware semi-supervised learning that assumes low clustering errors. Finally, by unifying data augmentation consistency regularization into this cluster-aware framework, we show that despite the common effect of learning accurate clusterings, RKD facilitates a "global" perspective through spectral clustering, whereas consistency regularization focuses on a "local" perspective via expansion.
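
For intuition, here is a minimal sketch of the feature-relation matching that RKD performs (in the style of Park et al.'s pairwise-distance loss; the normalization and loss choice are illustrative, not the exact objective analyzed in the paper):

```python
import torch
import torch.nn.functional as F

def rkd_pairwise_loss(student_feats, teacher_feats):
    """Match scale-normalized pairwise feature distances between student and teacher."""
    with torch.no_grad():
        t_dist = torch.cdist(teacher_feats, teacher_feats)
        t_dist = t_dist / (t_dist[t_dist > 0].mean() + 1e-8)  # normalize distance scale
    s_dist = torch.cdist(student_feats, student_feats)
    s_dist = s_dist / (s_dist[s_dist > 0].mean() + 1e-8)
    return F.smooth_l1_loss(s_dist, t_dist)

# Toy usage on a batch of 32 unlabeled points (teacher dim 64, student dim 16).
teacher_feats = torch.randn(32, 64)
student_feats = torch.randn(32, 16, requires_grad=True)
rkd_pairwise_loss(student_feats, teacher_feats).backward()
```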

Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms

Jun 22, 2023
Qian Yu, Yining Wang, Baihe Huang, Qi Lei, Jason D. Lee

In stochastic zeroth-order optimization, a problem of practical relevance is understanding how to fully exploit the local geometry of the underlying objective function. We consider a fundamental setting in which the objective function is quadratic, and provide the first tight characterization of the optimal Hessian-dependent sample complexity. Our contribution is twofold. First, from an information-theoretic point of view, we prove tight lower bounds on Hessian-dependent complexities by introducing a concept called energy allocation, which captures the interaction between the searching algorithm and the geometry of objective functions. A matching upper bound is obtained by solving the optimal energy spectrum. Then, algorithmically, we show the existence of a Hessian-independent algorithm that universally achieves the asymptotic optimal sample complexities for all Hessian instances. The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions; this robustness is enabled by a truncation method.
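
The algorithmic setting can be made concrete with a toy zeroth-order loop: two-point gradient estimates on a noisy quadratic, with clipped (truncated) finite differences to cope with heavy-tailed evaluation noise. This is a generic sketch, not the paper's optimal algorithm; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = np.diag(np.linspace(0.5, 4.0, d))  # unknown Hessian of f(x) = x^T A x / 2

def f(x):
    # Noisy zeroth-order oracle with heavy-tailed (Student-t) noise.
    return 0.5 * x @ A @ x + rng.standard_t(df=3)

def zo_gradient(x, mu=0.1, n_samples=64, clip=50.0):
    """Two-point zeroth-order gradient estimate with truncated differences."""
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        diff = np.clip((f(x + mu * u) - f(x - mu * u)) / (2 * mu), -clip, clip)
        g += diff * u
    return g / n_samples

x = rng.standard_normal(d)
for _ in range(200):
    x -= 0.05 * zo_gradient(x)
print("final objective value:", 0.5 * x @ A @ x)
```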

Reconstructing Training Data from Model Gradient, Provably

Dec 07, 2022
Zihan Wang, Jason Lee, Qi Lei

Understanding when and how much a model gradient leaks information about the training sample is an important question in privacy. In this paper, we present a surprising result: even without training or memorizing the data, we can fully reconstruct the training samples from a single gradient query at a randomly chosen parameter value. We prove the identifiability of the training data under mild conditions: with shallow or deep neural networks and a wide range of activation functions. We also present a statistically and computationally efficient algorithm based on tensor decomposition to reconstruct the training data. As a provable attack that reveals sensitive training data, our findings suggest potentially severe threats to privacy, especially in federated learning.
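
The paper's reconstruction algorithm is based on tensor decomposition; for intuition about the threat model, here is the simpler gradient-matching baseline (in the spirit of "deep leakage from gradients"), where an attacker who observes one gradient optimizes a dummy input until its gradient matches. The architecture and the assumption that the label is known are illustrative.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(10, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))
loss_fn = torch.nn.MSELoss()
x_true, y = torch.randn(1, 10), torch.randn(1, 1)

# The observed (leaked) gradient of a single training example.
g_true = torch.autograd.grad(loss_fn(net(x_true), y), net.parameters())

# Attacker: optimize a dummy input so its gradient matches the observed one.
x_hat = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.Adam([x_hat], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    g_hat = torch.autograd.grad(loss_fn(net(x_hat), y), net.parameters(),
                                create_graph=True)
    sum(((a - b) ** 2).sum() for a, b in zip(g_hat, g_true)).backward()
    opt.step()
print("reconstruction error:", (x_hat - x_true).norm().item())
```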

COEP: Cascade Optimization for Inverse Problems with Entropy-Preserving Hyperparameter Tuning

Oct 26, 2022
Tianci Liu, Tong Yang, Quan Zhang, Qi Lei

We propose COEP, an automated and principled framework to solve inverse problems with deep generative models. COEP consists of two components, a cascade algorithm for optimization and an entropy-preserving criterion for hyperparameter tuning. Together, the two components form an efficient, end-to-end solver for inverse problems that requires no human evaluation. We establish theoretical guarantees for the proposed methods. We also empirically validate the strength of COEP on denoising and noisy compressed sensing, two fundamental tasks in inverse problems.
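
The core inner problem such pipelines solve can be sketched in a few lines: recover a signal in the range of a generator G from compressed measurements by optimizing over the latent code. The untrained random generator below is a placeholder, and COEP's cascade optimization and entropy-preserving tuning are not shown.

```python
import torch

torch.manual_seed(0)
n, m, k = 100, 30, 10  # signal dim, number of measurements, latent dim
G = torch.nn.Sequential(torch.nn.Linear(k, 64), torch.nn.ReLU(), torch.nn.Linear(64, n))
for p in G.parameters():
    p.requires_grad_(False)  # the generative prior is fixed
A = torch.randn(m, n) / m ** 0.5  # compressed-sensing measurement matrix
x_true = G(torch.randn(k))  # ground truth lies in the generator's range
y = A @ x_true + 0.01 * torch.randn(m)  # noisy measurements

z = torch.zeros(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(1000):
    opt.zero_grad()
    ((A @ G(z) - y) ** 2).sum().backward()
    opt.step()
print("relative recovery error:", ((G(z) - x_true).norm() / x_true.norm()).item())
```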

Efficient Medical Image Assessment via Self-supervised Learning

Sep 28, 2022
Chun-Yin Huang, Qi Lei, Xiaoxiao Li

High-performance deep learning methods typically rely on large annotated training datasets, which are difficult to obtain in many clinical applications due to the high cost of medical image labeling. Existing data assessment methods commonly require knowing the labels in advance, which is infeasible for our goal of 'knowing which data to label.' To this end, we formulate and propose a novel and efficient data assessment strategy, the EXponentiAl Marginal sINgular valuE (EXAMINE) score, to rank the quality of unlabeled medical image data based on their useful latent representations extracted via Self-supervised Learning (SSL) networks. Motivated by the theoretical implications of the SSL embedding space, we leverage a Masked Autoencoder for feature extraction. Furthermore, we evaluate data quality based on the marginal change of the largest singular value after excluding a data point from the dataset. We conduct extensive experiments on a pathology dataset. Our results indicate the effectiveness and efficiency of our proposed methods for selecting the most valuable data to label.
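
The leave-one-out scoring idea admits a compact sketch: score each point by how much the top singular value of the embedding matrix drops when that point is removed. The random embeddings below stand in for SSL features, and details of the actual EXAMINE score (e.g., its exponential weighting) are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 64))  # rows: SSL embeddings of unlabeled images

def top_singular_value(M):
    return np.linalg.svd(M, compute_uv=False)[0]

sigma_full = top_singular_value(Z)
# Marginal change of the largest singular value when each point is excluded.
scores = np.array([sigma_full - top_singular_value(np.delete(Z, i, axis=0))
                   for i in range(Z.shape[0])])
print("indices to label first:", np.argsort(-scores)[:10])
```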

Nearly Minimax Algorithms for Linear Bandits with Shared Representation

Mar 29, 2022
Jiaqi Yang, Qi Lei, Jason D. Lee, Simon S. Du

We give novel algorithms for multi-task and lifelong linear bandits with shared representation. Specifically, we consider the setting where we play $M$ linear bandits with dimension $d$, each for $T$ rounds, and these $M$ bandit tasks share a common $k(\ll d)$ dimensional linear representation. For both the multi-task setting where we play the tasks concurrently, and the lifelong setting where we play tasks sequentially, we come up with novel algorithms that achieve $\widetilde{O}\left(d\sqrt{kMT} + kM\sqrt{T}\right)$ regret bounds, which match the known minimax regret lower bound up to logarithmic factors and close the gap in existing results [Yang et al., 2021]. Our main techniques include a more efficient estimator for the low-rank linear feature extractor and an accompanying novel analysis for this estimator.

* 19 pages, 3 figures 
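
A generic way to see how a shared low-dimensional representation can be recovered (a method-of-moments sketch, not the paper's more refined estimator): solve each task by least squares, stack the per-task estimates, and take a rank-k SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, M, n = 20, 3, 50, 200  # ambient dim, shared rank, tasks, samples per task
B = np.linalg.qr(rng.standard_normal((d, k)))[0]  # shared representation
Theta = B @ rng.standard_normal((k, M))  # column m: true parameter of task m

Theta_hat = np.zeros((d, M))
for m in range(M):  # per-task least squares
    X = rng.standard_normal((n, d))
    y = X @ Theta[:, m] + 0.1 * rng.standard_normal(n)
    Theta_hat[:, m] = np.linalg.lstsq(X, y, rcond=None)[0]

B_hat = np.linalg.svd(Theta_hat)[0][:, :k]  # rank-k left singular subspace
# Subspace error: how much of the true representation B escapes span(B_hat).
print(np.linalg.norm((np.eye(d) - B_hat @ B_hat.T) @ B, 2))
```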
Sample Efficiency of Data Augmentation Consistency Regularization

Feb 24, 2022
Shuo Yang, Yijun Dong, Rachel Ward, Inderjit S. Dhillon, Sujay Sanghavi, Qi Lei

Data augmentation is popular in the training of large neural networks; currently, however, there is no clear theoretical comparison between different algorithmic choices on how to use augmented data. In this paper, we take a step in this direction: we first present a simple and novel analysis for linear regression, demonstrating that data augmentation consistency (DAC) is intrinsically more efficient than empirical risk minimization on augmented data (DA-ERM). We then propose a new theoretical framework for analyzing DAC, which reframes DAC as a way to reduce function class complexity. The new framework characterizes the sample efficiency of DAC for various non-linear models (e.g., neural networks). Further, we perform experiments that make a clean and apples-to-apples comparison (i.e., with no extra modeling or data tweaks) between ERM and consistency regularization using CIFAR-100 and WideResNet; these together demonstrate the superior efficacy of DAC.
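
The distinction between the two algorithmic choices fits in a few lines (a minimal sketch; the consistency weight and the additive-noise augmentation are placeholders):

```python
import torch
import torch.nn.functional as F

def da_erm_loss(model, x, y, augment):
    """DA-ERM: empirical risk on the union of original and augmented data."""
    return F.cross_entropy(model(torch.cat([x, augment(x)])), torch.cat([y, y]))

def dac_loss(model, x, y, augment, lam=1.0):
    """DAC: risk on original data plus a penalty tying the model's outputs
    on each point to its outputs on the augmented point."""
    out, out_aug = model(x), model(augment(x))
    return F.cross_entropy(out, y) + lam * ((out - out_aug) ** 2).mean()

# Toy usage with a linear model.
model = torch.nn.Linear(10, 3)
x, y = torch.randn(8, 10), torch.randint(0, 3, (8,))
augment = lambda t: t + 0.1 * torch.randn_like(t)
print(da_erm_loss(model, x, y, augment).item(), dac_loss(model, x, y, augment).item())
```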

Bi-CLKT: Bi-Graph Contrastive Learning based Knowledge Tracing

Jan 22, 2022
Xiangyu Song, Jianxin Li, Qi Lei, Wei Zhao, Yunliang Chen, Ajmal Mian

The goal of Knowledge Tracing (KT) is to estimate how well students have mastered a concept based on their historical learning of related exercises. The benefit of knowledge tracing is that students' learning plans can be better organised and adjusted, and interventions can be made when necessary. With the recent rise of deep learning, Deep Knowledge Tracing (DKT) has utilised Recurrent Neural Networks (RNNs) to accomplish this task with some success. Other works have attempted to introduce Graph Neural Networks (GNNs) and redefine the task accordingly to achieve significant improvements. However, these efforts suffer from at least one of the following drawbacks: 1) they pay too much attention to details of the nodes rather than to high-level semantic information; 2) they struggle to effectively establish spatial associations and complex structures of the nodes; and 3) they represent either concepts or exercises only, without integrating them. Inspired by recent advances in self-supervised learning, we propose Bi-Graph Contrastive Learning based Knowledge Tracing (Bi-CLKT) to address these limitations. Specifically, we design a two-layer contrastive learning scheme based on an "exercise-to-exercise" (E2E) relational subgraph. It involves node-level contrastive learning of subgraphs to obtain discriminative representations of exercises, and graph-level contrastive learning to obtain discriminative representations of concepts. Moreover, we design a joint contrastive loss to obtain better representations and hence better prediction performance. We also explore two different variants, using an RNN and a memory-augmented neural network as the prediction layer, to obtain better representations of exercises and concepts respectively. Extensive experiments on four real-world datasets show that the proposed Bi-CLKT and its variants outperform other baseline models.

* 12 pages, 2 figures 
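
For intuition, node-level graph contrastive learning of the kind described above is typically built on an InfoNCE-style objective over two augmented views; the sketch below is generic rather than Bi-CLKT's exact loss.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """Embeddings of the same exercise under two graph views are positives;
    all other nodes in the batch serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau  # [N, N] cosine similarities
    targets = torch.arange(z1.size(0))  # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 16 exercise nodes, 32-dim embeddings, two augmented views.
z1, z2 = torch.randn(16, 32), torch.randn(16, 32)
print(info_nce(z1, z2).item())
```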

Provable Hierarchy-Based Meta-Reinforcement Learning

Oct 18, 2021
Kurtland Chua, Qi Lei, Jason D. Lee

Hierarchical reinforcement learning (HRL) has seen widespread interest as an approach to tractable learning of complex modular behaviors. However, existing work either assumes access to expert-constructed hierarchies or uses hierarchy-learning heuristics with no provable guarantees. To address this gap, we analyze HRL in the meta-RL setting, where a learner learns latent hierarchical structure during meta-training for use in a downstream task. We consider a tabular setting where natural hierarchical structure is embedded in the transition dynamics. Analogous to supervised meta-learning theory, we provide "diversity conditions" which, together with a tractable optimism-based algorithm, guarantee sample-efficient recovery of this natural hierarchy. Furthermore, we provide regret bounds on a learner using the recovered hierarchy to solve a meta-test task. Our bounds incorporate common notions in the HRL literature such as temporal and state/action abstractions, suggesting that our setting and analysis capture important features of HRL in practice.
