Aravindan Vijayaraghavan

Computing linear sections of varieties: quantum entanglement, tensor decompositions and beyond

Dec 07, 2022
Nathaniel Johnston, Benjamin Lovitz, Aravindan Vijayaraghavan

We study the problem of finding elements in the intersection of an arbitrary conic variety in $\mathbb{F}^n$ with a given linear subspace (where $\mathbb{F}$ can be the real or complex field). This problem captures a rich family of algorithmic problems under different choices of the variety. The special case of the variety consisting of rank-1 matrices already has strong connections to central problems in different areas like quantum information theory and tensor decompositions. This problem is known to be NP-hard in the worst case, even for the variety of rank-1 matrices. Surprisingly, despite these hardness results, we give efficient algorithms that solve this problem for "typical" subspaces. Here, the subspace $U \subseteq \mathbb{F}^n$ is chosen generically of a certain dimension, potentially with some generic elements of the variety contained in it. Our main algorithmic result is a polynomial time algorithm that recovers all the elements of $U$ that lie in the variety, under some mild non-degeneracy assumptions on the variety. As corollaries, we obtain the following results:
$\bullet$ Uniqueness results and polynomial time algorithms for generic instances of a broad class of low-rank decomposition problems that go beyond tensor decompositions. Here, we recover a decomposition of the form $\sum_{i=1}^R v_i \otimes w_i$, where the $v_i$ are elements of the given variety $X$. This implies new algorithmic results even in the special case of tensor decompositions.
$\bullet$ Polynomial time algorithms for several entangled subspaces problems in quantum entanglement, including determining $r$-entanglement, complete entanglement, and genuine entanglement of a subspace. While all of these problems are NP-hard in the worst case, our algorithm solves them in polynomial time for generic subspaces of dimension up to a constant multiple of the maximum possible.
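
As a concrete illustration of the rank-1 special case, the sketch below (an illustrative instance generator, not the paper's recovery algorithm) plants a few rank-1 matrices inside an otherwise generic subspace of matrices; the task described above is to recover exactly those planted elements from an arbitrary basis of the subspace. The dimensions and the `is_rank_one` helper are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R, extra = 8, 3, 4          # matrix size, planted rank-1 elements, generic fillers

# Planted elements of the variety: rank-1 matrices v_i w_i^T.
planted = [np.outer(rng.standard_normal(n), rng.standard_normal(n)) for _ in range(R)]

# Generic directions, which (with probability 1) lie far from the rank-1 variety.
fillers = [rng.standard_normal((n, n)) for _ in range(extra)]

# The subspace U is spanned by both; the algorithmic task is to recover the
# planted rank-1 elements given only an arbitrary ("anonymized") basis of U.
basis = np.stack([M.ravel() for M in planted + fillers])
U = np.linalg.qr(basis.T)[0].T          # orthonormal basis of U, one vector per row

def is_rank_one(M, tol=1e-8):
    s = np.linalg.svd(M.reshape(n, n), compute_uv=False)
    return s[1] <= tol * s[0]

# A generic basis vector of U is not itself rank-1 ...
print(any(is_rank_one(u) for u in U))                 # typically False
# ... even though U contains R rank-1 matrices by construction.
print(all(is_rank_one(M.ravel()) for M in planted))   # True
```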

* 45 pages. Comments welcome! 

Classification Protocols with Minimal Disclosure

Sep 06, 2022
Jinshuo Dong, Jason Hartline, Aravindan Vijayaraghavan

We consider multi-party protocols for classification that are motivated by applications such as e-discovery in court proceedings. We identify a protocol that guarantees that the requesting party receives all responsive documents and the sending party discloses the minimal amount of non-responsive documents necessary to prove that all responsive documents have been received. This protocol can be embedded in a machine learning framework that enables automated labeling of points and the resulting multi-party protocol is equivalent to the standard one-party classification problem (if the one-party classification problem satisfies a natural independence-of-irrelevant-alternatives property). Our formal guarantees focus on the case where there is a linear classifier that correctly partitions the documents.

* In Proceedings of the 2022 Symposium on Computer Science and Law (CSLAW '22), November 1-2, 2022, Washington, DC, USA. ACM, New York, NY, USA, 10 pages  

Agnostic Learning of General ReLU Activation Using Gradient Descent

Aug 04, 2022
Pranjal Awasthi, Alex Tang, Aravindan Vijayaraghavan

We provide a convergence analysis of gradient descent for the problem of agnostically learning a single ReLU function under Gaussian distributions. Unlike prior work that studies the setting of zero bias, we consider the more challenging scenario when the bias of the ReLU function is non-zero. Our main result establishes that starting from random initialization, in a polynomial number of iterations gradient descent outputs, with high probability, a ReLU function that achieves a competitive error guarantee when compared to the error of the best ReLU function. We also provide finite sample guarantees, and these techniques generalize to a broader class of marginal distributions beyond Gaussians.
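
The following is a minimal sketch of the training setup analyzed above: plain gradient descent on the empirical squared loss for a single ReLU with non-zero bias, with Gaussian inputs and random initialization. The step size, iteration count, sample size, and noise level are illustrative choices, not the ones prescribed by the analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 20000

# Target ReLU with non-zero bias; labels may be arbitrary in the agnostic setting,
# here a noisy realizable target is used purely for illustration.
w_star, b_star = rng.standard_normal(d), 0.7
X = rng.standard_normal((n, d))                        # Gaussian marginal
y = np.maximum(X @ w_star + b_star, 0) + 0.1 * rng.standard_normal(n)

def relu(t):
    return np.maximum(t, 0)

# Random initialization, then plain gradient descent on the empirical square loss.
w, b, lr = 0.1 * rng.standard_normal(d), 0.0, 0.05
for _ in range(500):
    pre = X @ w + b
    err = relu(pre) - y
    grad_pre = err * (pre > 0)                         # ReLU derivative
    w -= lr * (X.T @ grad_pre) / n
    b -= lr * grad_pre.mean()

print("final squared loss:", np.mean((relu(X @ w + b) - y) ** 2))
```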

* 28 pages 

Training Subset Selection for Weak Supervision

Jun 06, 2022
Hunter Lang, Aravindan Vijayaraghavan, David Sontag

Existing weak supervision approaches use all the data covered by weak signals to train a classifier. We show both theoretically and empirically that this is not always optimal. Intuitively, there is a tradeoff between the amount of weakly-labeled data and the precision of the weak labels. We explore this tradeoff by combining pretrained data representations with the cut statistic (Muhlenbach et al., 2004) to select (hopefully) high-quality subsets of the weakly-labeled training data. Subset selection applies to any label model and classifier and is very simple to plug into existing weak supervision pipelines, requiring just a few lines of code. We show our subset selection method improves the performance of weak supervision for a wide range of label models, classifiers, and datasets. Using less weakly-labeled data improves the accuracy of weak supervision pipelines by up to 19% (absolute) on benchmark tasks.
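
As a rough sketch of the idea, the snippet below scores each weakly labeled point by how often its nearest neighbors in a pretrained embedding space carry a different weak label, and keeps the lowest-disagreement fraction. This is a simplified stand-in for the cut statistic of Muhlenbach et al. (2004), not the exact statistic from the paper; `select_subset` and its parameters are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def select_subset(embeddings, weak_labels, keep_frac=0.5, k=10):
    """Keep the points whose k nearest neighbors most often agree with their weak label.

    A simplified neighborhood-disagreement score standing in for the cut statistic:
    low disagreement suggests the weak label is reliable in that region of the
    pretrained representation space.
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    _, idx = nn.kneighbors(embeddings)            # idx[:, 0] is the point itself
    neighbor_labels = weak_labels[idx[:, 1:]]
    disagreement = (neighbor_labels != weak_labels[:, None]).mean(axis=1)
    n_keep = int(keep_frac * len(weak_labels))
    return np.argsort(disagreement)[:n_keep]      # indices of the "cleanest" points

# Usage: train the end classifier only on (X[keep], weak_labels[keep]), e.g.
# keep = select_subset(pretrained_embeddings, weak_labels)
```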

Efficient Algorithms for Learning Depth-2 Neural Networks with General ReLU Activations

Aug 01, 2021
Pranjal Awasthi, Alex Tang, Aravindan Vijayaraghavan

We present polynomial time and sample efficient algorithms for learning an unknown depth-2 feedforward neural network with general ReLU activations, under mild non-degeneracy assumptions. In particular, we consider learning an unknown network of the form $f(x) = {a}^{\mathsf{T}}\sigma({W}^\mathsf{T}x+b)$, where $x$ is drawn from the Gaussian distribution, and $\sigma(t) := \max(t,0)$ is the ReLU activation. Prior works for learning networks with ReLU activations assume that the bias $b$ is zero. In order to deal with the presence of the bias terms, our proposed algorithm consists of robustly decomposing multiple higher order tensors arising from the Hermite expansion of the function $f(x)$. Using these ideas we also establish identifiability of the network parameters under minimal assumptions.
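
To make the Hermite-expansion step concrete, the sketch below estimates the order-3 Hermite moment tensor $T = \mathbb{E}[f(x)\, \mathrm{He}_3(x)]$ from samples; for Gaussian inputs this tensor is, up to bias-dependent scalars, a sum of rank-1 terms along the columns of $W$, which is the kind of structure the algorithm decomposes. This is only the moment-estimation step with illustrative sizes, and omits the robust decomposition and the other higher-order tensors used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, n = 6, 3, 100000

# A small depth-2 network f(x) = a^T relu(W^T x + b) with non-zero biases.
W, b, a = rng.standard_normal((d, k)), rng.standard_normal(k), rng.standard_normal(k)
X = rng.standard_normal((n, d))
f = np.maximum(X @ W + b, 0) @ a

# Empirical order-3 Hermite moment  T = E[f(x) * He3(x)], where
#   He3(x)_{ijk} = x_i x_j x_k - x_i delta_{jk} - x_j delta_{ik} - x_k delta_{ij}.
# For Gaussian x, T is approximately a weighted sum of w_i (x) w_i (x) w_i, with
# weights depending on a_i, b_i, ||w_i|| -- the structure decomposed to recover W.
T = np.einsum('n,ni,nj,nk->ijk', f, X, X, X) / n
m1 = (f[:, None] * X).mean(axis=0)                     # E[f(x) x]
eye = np.eye(d)
T -= (np.einsum('i,jk->ijk', m1, eye)
      + np.einsum('j,ik->ijk', m1, eye)
      + np.einsum('k,ij->ijk', m1, eye))

print(T.shape)   # (d, d, d); the next step would be a (robust) tensor decomposition
```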

* 45 pages (including appendix). This version fixes an error in the previous version of the paper 

Beyond Perturbation Stability: LP Recovery Guarantees for MAP Inference on Noisy Stable Instances

Feb 26, 2021
Hunter Lang, Aravind Reddy, David Sontag, Aravindan Vijayaraghavan

Several works have shown that perturbation stable instances of the MAP inference problem in Potts models can be solved exactly using a natural linear programming (LP) relaxation. However, most of these works give few (or no) guarantees for the LP solutions on instances that do not satisfy the relatively strict perturbation stability definitions. In this work, we go beyond these stability results by showing that the LP approximately recovers the MAP solution of a stable instance even after the instance is corrupted by noise. This "noisy stable" model realistically fits with practical MAP inference problems: we design an algorithm for finding "close" stable instances, and show that several real-world instances from computer vision have nearby instances that are perturbation stable. These results suggest a new theoretical explanation for the excellent performance of this LP relaxation in practice.
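
For reference, the LP referred to above is the standard local (pairwise) relaxation of Potts MAP inference. With node costs $\theta_u(i)$ and edge weights $w_{uv} \ge 0$, one common way to write it is
$$\min_{\mu \ge 0} \; \sum_u \sum_i \theta_u(i)\,\mu_u(i) \;+\; \sum_{(u,v) \in E} w_{uv} \sum_{i \neq j} \mu_{uv}(i,j)$$
subject to $\sum_i \mu_u(i) = 1$, $\sum_j \mu_{uv}(i,j) = \mu_u(i)$, and $\sum_i \mu_{uv}(i,j) = \mu_v(j)$. On perturbation stable instances this relaxation is known to recover the MAP solution exactly; the result above says its solution stays close to the MAP assignment of a stable instance even after the instance is corrupted by noise.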

* 25 pages, 2 figures, 2 tables. To appear in AISTATS 2021 

Graph cuts always find a global optimum (with a catch)

Nov 07, 2020
Hunter Lang, David Sontag, Aravindan Vijayaraghavan

We prove that the alpha-expansion algorithm for MAP inference always returns a globally optimal assignment for Markov Random Fields with Potts pairwise potentials, with a catch: the returned assignment is only guaranteed to be optimal in a small perturbation of the original problem instance. In other words, all local minima with respect to expansion moves are global minima to slightly perturbed versions of the problem. On "real-world" instances, MAP assignments of small perturbations of the problem should be very similar to the MAP assignment(s) of the original problem instance. We design an algorithm that can certify whether this is the case in practice. On several MAP inference problem instances from computer vision, this algorithm certifies that MAP solutions to all of these perturbations are very close to solutions of the original instance. These results taken together give a cohesive explanation for the good performance of "graph cuts" algorithms in practice. Every local expansion minimum is a global minimum in a small perturbation of the problem, and all of these global minima are close to the original solution.
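
For context, alpha-expansion repeatedly applies the best "expand label alpha" move, which for Potts potentials can be computed exactly with a single minimum cut, and stops when no expansion move improves the energy. The sketch below shows only the outer loop; `expansion_move` is a hypothetical subroutine standing in for the min-cut step (e.g. via a max-flow library), and the initialization and sweep count are illustrative.

```python
import numpy as np

def potts_energy(labels, unary, edges, weights):
    """E(x) = sum_u unary[u, x_u] + sum_{(u,v)} w_uv * [x_u != x_v]."""
    e = unary[np.arange(len(labels)), labels].sum()
    for (u, v), w in zip(edges, weights):
        e += w * (labels[u] != labels[v])
    return e

def alpha_expansion(unary, edges, weights, expansion_move, n_labels, sweeps=5):
    """Outer loop of alpha-expansion.

    `expansion_move(labels, alpha, ...)` is an assumed subroutine returning the
    best assignment reachable by letting any subset of nodes switch to `alpha`
    (solved exactly by one min-cut for Potts potentials).
    """
    labels = unary.argmin(axis=1)                      # greedy initialization
    for _ in range(sweeps):
        improved = False
        for alpha in range(n_labels):
            proposal = expansion_move(labels, alpha, unary, edges, weights)
            if potts_energy(proposal, unary, edges, weights) < \
               potts_energy(labels, unary, edges, weights):
                labels, improved = proposal, True
        if not improved:       # local minimum w.r.t. expansion moves
            return labels
    return labels
```

The assignment returned once no expansion move improves the energy is exactly a local expansion minimum, the object to which the guarantee above applies.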

* 16 pages, 2 figures 

Adversarial robustness via robust low rank representations

Aug 01, 2020
Pranjal Awasthi, Himanshu Jain, Ankit Singh Rawat, Aravindan Vijayaraghavan

Adversarial robustness measures the susceptibility of a classifier to imperceptible perturbations made to the inputs at test time. In this work we highlight the benefits of natural low rank representations that often exist for real data such as images, for training neural networks with certified robustness guarantees. Our first contribution is for certified robustness to perturbations measured in $\ell_2$ norm. We exploit low rank data representations to provide improved guarantees over state-of-the-art randomized smoothing-based approaches on standard benchmark datasets such as CIFAR-10 and CIFAR-100. Our second contribution is for the more challenging setting of certified robustness to perturbations measured in $\ell_\infty$ norm. We demonstrate empirically that natural low rank representations have inherent robustness properties, that can be leveraged to provide significantly better guarantees for certified robustness to $\ell_\infty$ perturbations in those representations. Our certificate of $\ell_\infty$ robustness relies on a natural quantity involving the $\infty \to 2$ matrix operator norm associated with the representation, to translate robustness guarantees from $\ell_2$ to $\ell_\infty$ perturbations. A key technical ingredient for our certification guarantees is a fast algorithm with provable guarantees based on the multiplicative weights update method to provide upper bounds on the above matrix norm. Our algorithmic guarantees improve upon the state of the art for this problem, and may be of independent interest.
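
A small illustration of the norm-based translation described above: if $P$ maps inputs to the low rank representation, then any $\ell_\infty$ perturbation of radius $\epsilon$ moves the representation by at most $\|P\|_{\infty \to 2}\,\epsilon$ in $\ell_2$. The sketch below uses a crude but valid upper bound on the $\infty \to 2$ norm (via row-wise $\ell_1$ norms); the paper's multiplicative-weights algorithm gives tighter certified bounds. The matrix $P$ and the sizes here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
d, r = 784, 32
P = rng.standard_normal((r, d)) / np.sqrt(d)   # stand-in for a low rank representation map

# Crude (but valid) upper bound on the infinity->2 operator norm:
#   ||Px||_2^2 = sum_i <p_i, x>^2 <= sum_i ||p_i||_1^2   whenever ||x||_inf <= 1,
# so ||P||_{inf->2} <= sqrt(sum_i ||p_i||_1^2).
# (The paper obtains tighter certified upper bounds via multiplicative weights.)
norm_bound = np.sqrt((np.abs(P).sum(axis=1) ** 2).sum())

# Translating certificates: an l_inf perturbation of radius eps on the input moves
# the representation by at most norm_bound * eps in l_2.
eps = 2.0 / 255
x = rng.standard_normal(d)
delta = eps * rng.choice([-1.0, 1.0], size=d)
shift = np.linalg.norm(P @ (x + delta) - P @ x)
print(shift <= norm_bound * eps)               # always True
```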

* fixed a bug in the proof of Proposition B.2 

Efficient Tensor Decomposition

Jul 30, 2020
Aravindan Vijayaraghavan

This chapter studies the problem of decomposing a tensor into a sum of constituent rank-one tensors. While tensor decompositions are very useful in designing learning algorithms and in data analysis, they are NP-hard to compute in the worst case. We will see how to design efficient algorithms with provable guarantees under mild assumptions, and using beyond worst-case frameworks like smoothed analysis.
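
One of the classical algorithms treated in this style is Jennrich's simultaneous-diagonalization algorithm for order-3 tensors whose first two factor sets are linearly independent. The sketch below is a plain textbook version for exact low-rank tensors, ignoring the noise-robustness and smoothed-analysis refinements the chapter discusses.

```python
import numpy as np

def jennrich(T, rank, rng=np.random.default_rng(0)):
    """Jennrich's algorithm: decompose T ~ sum_r a_r (x) b_r (x) c_r, assuming the
    a_r and the b_r are each linearly independent and the c_r are generic."""
    d1, d2, d3 = T.shape
    # Two random contractions of the third mode: M_x = A diag(<c_r, x>) B^T.
    x, y = rng.standard_normal(d3), rng.standard_normal(d3)
    Mx, My = T @ x, T @ y
    # Eigenvectors of Mx My^+ with the `rank` largest eigenvalues are the a_r.
    vals, vecs = np.linalg.eig(Mx @ np.linalg.pinv(My))
    A = np.real(vecs[:, np.argsort(-np.abs(vals))[:rank]])
    # Given A, the second factor (up to per-component scaling) is A^+ Mx.
    B = np.real(np.linalg.pinv(A) @ Mx).T              # columns proportional to b_r
    # Recover the third factor by least squares against the flattened tensor.
    K = np.stack([np.outer(A[:, r], B[:, r]).ravel() for r in range(rank)], axis=1)
    C = np.linalg.lstsq(K, T.reshape(d1 * d2, d3), rcond=None)[0].T
    return A, B, C

# Sanity check on a random rank-3 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((6, 3)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = jennrich(T, 3)
err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T)
print(err)   # close to 0
```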

* Chapter 19 of the book "Beyond the Worst-Case Analysis of Algorithms", edited by Tim Roughgarden and published by Cambridge University Press (2020). We hope to occasionally update the survey here to include discussions of new results and advances 