Gal Kaplun

Corgi^2: A Hybrid Offline-Online Approach To Storage-Aware Data Shuffling For SGD
Sep 04, 2023

Beyond Implicit Bias: The Insignificance of SGD Noise in Online Learning
Jun 14, 2023

SubTuning: Efficient Finetuning for Multi-Task Learning
Feb 14, 2023

Knowledge Distillation: Bad Models Can Be Good Role Models
Mar 28, 2022

Deconstructing Distributions: A Pointwise Framework of Learning
Feb 20, 2022

For Manifold Learning, Deep Neural Networks can be Locality Sensitive Hash Functions
Mar 11, 2021

For self-supervised learning, Rationality implies generalization, provably
Oct 16, 2020

Robustness from Simple Classifiers
Feb 21, 2020

Deep Double Descent: Where Bigger Models and More Data Hurt
Dec 04, 2019

SGD on Neural Networks Learns Functions of Increasing Complexity
May 28, 2019