Cody Blakeney

Reduce, Reuse, Recycle: Improving Training Efficiency with Distillation

Nov 01, 2022
Cody Blakeney, Jessica Zosa Forde, Jonathan Frankle, Ziliang Zong, Matthew L. Leavitt

Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Neural Networks

Oct 08, 2021
Cody Blakeney, Gentry Atkinson, Nathaniel Huish, Yan Yan, Vangelis Metris, Ziliang Zong

Simon Says: Evaluating and Mitigating Bias in Pruned Neural Networks with Knowledge Distillation

Jun 15, 2021
Cody Blakeney, Nathaniel Huish, Yan Yan, Ziliang Zong

Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression

Dec 05, 2020
Cody Blakeney, Xiaomin Li, Yan Yan, Ziliang Zong
