
Publications by Sanjeev Arora

On Predicting Generalization using GANs

Nov 28, 2021

Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias

Nov 09, 2021

What Happens after SGD Reaches Zero Loss? --A Mathematical Framework

Oct 13, 2021

Rip van Winkle's Razor: A Simple Estimate of Overfit to Test Data

Feb 25, 2021

On the Validity of Modeling SGD with Stochastic Differential Equations

Feb 24, 2021

Why Are Convolutional Nets More Sample-Efficient than Fully-Connected Nets?

Oct 16, 2020

TextHide: Tackling Data Privacy in Language Understanding Tasks

Oct 12, 2020

A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks

Oct 07, 2020

Reconciling Modern Deep Learning with Traditional Optimization Analyses: The Intrinsic Learning Rate

Oct 06, 2020

InstaHide: Instance-hiding Schemes for Private Distributed Learning

Oct 06, 2020