Ramchandran Muthukumar

Sparsity-aware generalization theory for deep neural networks
Jul 04, 2023
Ramchandran Muthukumar, Jeremias Sulam

Adversarial robustness of sparse local Lipschitz predictors
Feb 26, 2022
Ramchandran Muthukumar, Jeremias Sulam

A Study of Neural Training with Non-Gradient and Noise Assisted Gradient Methods
May 08, 2020
Anirbit Mukherjee, Ramchandran Muthukumar

Guarantees on learning depth-2 neural networks under a data-poisoning attack
May 04, 2020
Anirbit Mukherjee, Ramchandran Muthukumar
