
Daniel M. Roy

University of Toronto

NeurIPS 2020 Competition: Predicting Generalization in Deep Learning
Dec 14, 2020

On the Information Complexity of Proper Learners for VC Classes in the Realizable Case
Nov 05, 2020

Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel
Oct 28, 2020

Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability
Oct 28, 2020

In Search of Robust Measures of Generalization
Oct 22, 2020

Pruning Neural Networks at Initialization: Why are We Missing the Mark?
Sep 18, 2020

Relaxing the I.I.D. Assumption: Adaptive Minimax Optimal Sequential Prediction with Expert Advice
Jul 13, 2020

Improved Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance
Jul 02, 2020

On the role of data in PAC-Bayes bounds
Jun 19, 2020

Sharpened Generalization Bounds based on Conditional Mutual Information and an Application to Noisy, Iterative Algorithms
Apr 27, 2020