Samet Oymak

Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View

Nov 16, 2020

Unsupervised Paraphrasing via Deep Reinforcement Learning

Jul 05, 2020

Statistical and Algorithmic Insights for Semi-supervised Learning with Self-training

Jun 19, 2020

Exploring Weight Importance and Hessian Bias in Model Pruning

Jun 19, 2020

On the Role of Dataset Quality and Heterogeneity in Model Confidence

Feb 23, 2020

Non-asymptotic and Accurate Learning of Nonlinear Dynamical Systems

Feb 20, 2020

Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian

Jul 04, 2019

Quickly Finding the Best Linear Model in High Dimensions

Jul 03, 2019

Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks

Apr 07, 2019

Towards Moderate Overparameterization: Global Convergence Guarantees for Training Shallow Neural Networks

Feb 12, 2019