Daniel M. Roy

University of Toronto

Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers

Nov 07, 2021
Jeffrey Negrea, Blair Bilodeau, Nicolò Campolongo, Francesco Orabona, Daniel M. Roy

The Future is Log-Gaussian: ResNets and Their Infinite-Depth-and-Width Limit at Initialization

Jun 07, 2021
Mufan Bill Li, Mihai Nica, Daniel M. Roy

NUQSGD: Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization

May 01, 2021
Ali Ramezani-Kebrya, Fartash Faghri, Ilya Markov, Vitalii Aksenov, Dan Alistarh, Daniel M. Roy

NeurIPS 2020 Competition: Predicting Generalization in Deep Learning

Dec 14, 2020
Yiding Jiang, Pierre Foret, Scott Yak, Daniel M. Roy, Hossein Mobahi, Gintare Karolina Dziugaite, Samy Bengio, Suriya Gunasekar, Isabelle Guyon, Behnam Neyshabur

On the Information Complexity of Proper Learners for VC Classes in the Realizable Case

Nov 05, 2020
Mahdi Haghifam, Gintare Karolina Dziugaite, Shay Moran, Daniel M. Roy

Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel

Oct 28, 2020
Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, Surya Ganguli

Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability

Oct 28, 2020
Gintare Karolina Dziugaite, Shai Ben-David, Daniel M. Roy

In Search of Robust Measures of Generalization

Oct 22, 2020
Gintare Karolina Dziugaite, Alexandre Drouin, Brady Neal, Nitarshan Rajkumar, Ethan Caballero, Linbo Wang, Ioannis Mitliagkas, Daniel M. Roy

Pruning Neural Networks at Initialization: Why are We Missing the Mark?

Sep 18, 2020
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin

Relaxing the I.I.D. Assumption: Adaptive Minimax Optimal Sequential Prediction with Expert Advice

Jul 13, 2020
Blair Bilodeau, Jeffrey Negrea, Daniel M. Roy
