Mahdi Soltanolkotabi

Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem

Dec 26, 2019
Hesameddin Mohammadi, Armin Zare, Mahdi Soltanolkotabi, Mihailo R. Jovanović

Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators

Oct 31, 2019
Reinhard Heckel, Mahdi Soltanolkotabi

Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian

Jul 04, 2019
Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi

Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks

Apr 07, 2019
Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak

Towards moderate overparameterization: global convergence guarantees for training shallow neural networks

Feb 12, 2019
Samet Oymak, Mahdi Soltanolkotabi

Fitting ReLUs via SGD and Quantized SGD

Jan 19, 2019
Seyed Mohammadreza Mousavi Kalan, Mahdi Soltanolkotabi, A. Salman Avestimehr

Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?

Dec 25, 2018
Samet Oymak, Mahdi Soltanolkotabi

Polynomially Coded Regression: Optimal Straggler Mitigation via Data Encoding

May 24, 2018
Songze Li, Seyed Mohammadreza Mousavi Kalan, Qian Yu, Mahdi Soltanolkotabi, A. Salman Avestimehr

End-to-end Learning of a Convolutional Neural Network via Deep Tensor Decomposition

May 16, 2018
Samet Oymak, Mahdi Soltanolkotabi
