Murat A. Erdogdu

Generalization Bounds for Stochastic Gradient Descent via Localized $\varepsilon$-Covers

Sep 19, 2022
Sejun Park, Umut Şimşekli, Murat A. Erdogdu


$p$-DkNN: Out-of-Distribution Detection Through Statistical Testing of Deep Representations

Jul 25, 2022
Adam Dziedzic, Stephan Rabanser, Mohammad Yaghini, Armin Ale, Murat A. Erdogdu, Nicolas Papernot


High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation

May 03, 2022
Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang


Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance

Feb 23, 2022
Nuri Mert Vural, Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A. Erdogdu


Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo

Feb 10, 2022
Krishnakumar Balasubramanian, Sinho Chewi, Murat A. Erdogdu, Adil Salim, Matthew Zhang


Heavy-tailed Sampling via Transformed Unadjusted Langevin Algorithm

Jan 20, 2022
Ye He, Krishnakumar Balasubramanian, Murat A. Erdogdu


Analysis of Langevin Monte Carlo from Poincaré to Log-Sobolev

Dec 23, 2021
Sinho Chewi, Murat A. Erdogdu, Mufan Bill Li, Ruoqi Shen, Matthew Zhang


On Empirical Risk Minimization with Dependent and Heavy-Tailed Data

Sep 10, 2021
Abhishek Roy, Krishnakumar Balasubramanian, Murat A. Erdogdu


Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms

Jun 09, 2021
Alexander Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gürbüzbalaban, Umut Şimşekli, Lingjiong Zhu
