
Tengyu Ma

Polynomial-time Tensor Decompositions with Sum-of-Squares

Oct 06, 2016

Gradient Descent Learns Linear Dynamical Systems

Sep 16, 2016

RAND-WALK: A Latent Variable Model Approach to Word Embeddings

Jul 22, 2016

Provable Algorithms for Inference in Topic Models

May 27, 2016

Communication Lower Bounds for Statistical Estimation Problems via a Distributed Data Processing Inequality

May 10, 2016

Distributed Stochastic Variance Reduced Gradient Methods and A Lower Bound for Communication Complexity

Jan 06, 2016

Why are deep nets reversible: A simple theory, with implications for training

Nov 19, 2015

Sum-of-Squares Lower Bounds for Sparse PCA

Oct 18, 2015

Decomposing Overcomplete 3rd Order Tensors using Sum-of-Squares Algorithms

Apr 21, 2015

Simple, Efficient, and Neural Algorithms for Sparse Coding

Mar 02, 2015