Charles H. Martin

Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training

Dec 01, 2023
Yefan Zhou, Tianyu Pang, Keqin Liu, Charles H. Martin, Michael W. Mahoney, Yaoqing Yang

Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data

Feb 06, 2022
Yaoqing Yang, Ryan Theisen, Liam Hodgkinson, Joseph E. Gonzalez, Kannan Ramchandran, Charles H. Martin, Michael W. Mahoney

Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics

Jun 01, 2021
Charles H. Martin, Michael W. Mahoney

Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data

Feb 17, 2020
Charles H. Martin, Tongsu (Serena) Peng, Michael W. Mahoney

Heavy-Tailed Universality Predicts Trends in Test Accuracies for Very Large Pre-Trained Deep Neural Networks

Jan 24, 2019
Charles H. Martin, Michael W. Mahoney

Traditional and Heavy-Tailed Self Regularization in Neural Network Models

Jan 24, 2019
Charles H. Martin, Michael W. Mahoney

Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning

Oct 02, 2018
Charles H. Martin, Michael W. Mahoney

Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behavior

Oct 26, 2017
Charles H. Martin, Michael W. Mahoney

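The papers listed above share a recurring theme: assessing trained networks layer by layer from the eigenvalue spectra of their weight matrices alone, without access to any training or testing data. The sketch below is only a minimal illustration of that general idea, not the procedure used in any of these papers: it computes one layer's eigenvalue spectrum from its weight matrix and fits a power-law tail exponent with a simple Hill-type estimator. The function name, the tail-fraction cutoff, and the example matrix are hypothetical choices made for this sketch.

import numpy as np

def layer_alpha(W, tail_frac=0.5):
    """Estimate a power-law tail exponent (alpha) for one layer's
    eigenvalue spectrum, using only the weight matrix W itself.

    The eigenvalues of the correlation matrix X = W^T W are the squared
    singular values of W; alpha is fit to the largest eigenvalues with a
    Hill-type maximum-likelihood estimator for a Pareto tail.
    """
    # Squared singular values of W = eigenvalues of W^T W.
    svals = np.linalg.svd(W, compute_uv=False)
    eigs = np.sort(svals ** 2)

    # Fit only the upper tail of the spectrum (top tail_frac of eigenvalues).
    k = max(2, int(tail_frac * len(eigs)))
    tail = eigs[-k:]
    lam_min = tail[0]

    # Continuous MLE (Hill estimator) for the Pareto exponent of the tail.
    alpha = 1.0 + k / np.sum(np.log(tail / lam_min))
    return alpha

if __name__ == "__main__":
    # Hypothetical example: a random 512x256 layer, scaled like a typical
    # initialization; a real analysis would loop over a trained model's layers.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((512, 256)) / np.sqrt(256)
    print(f"alpha = {layer_alpha(W):.2f}")

In this kind of analysis, the fitted exponent for each layer serves as a data-free diagnostic of training quality; the exact fitting procedure, cutoff selection, and interpretation used by the authors are described in the papers themselves.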