Bin Yu

Towards Robust Waveform-Based Acoustic Models

Oct 16, 2021

Interpreting and improving deep-learning models with reality checks

Aug 19, 2021

Adaptive wavelet distillation from neural networks through interpretations

Jul 19, 2021

Enriched Annotations for Tumor Attribute Classification from Pathology Reports with Limited Labeled Data

Dec 15, 2020

Stable discovery of interpretable subgroups via calibration in causal studies

Sep 29, 2020

Revisiting complexity and the bias-variance tradeoff

Jun 17, 2020

Instability, Computational Efficiency and Statistical Accuracy

May 22, 2020

Curating a COVID-19 data repository and forecasting county-level death counts in the United States

May 16, 2020

Transformation Importance with Applications to Cosmology

Mar 04, 2020

Interpretations are useful: penalizing explanations to align neural networks with prior knowledge

Oct 01, 2019