Dilin Wang

Noisy Training Improves E2E ASR for the Edge

Jul 09, 2021

Improve Vision Transformers Training by Suppressing Over-smoothing

Apr 26, 2021

AlphaNet: Improved Training of Supernet with Alpha-Divergence

Feb 16, 2021

AlphaMatch: Improving Consistency for Semi-supervised Learning with Alpha-divergence

Nov 23, 2020

KeepAugment: A Simple Information-Preserving Data Augmentation Approach

Nov 23, 2020

AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling

Nov 18, 2020

Stein Variational Gradient Descent With Matrix-Valued Kernels

Nov 05, 2019

Splitting Steepest Descent for Growing Neural Architectures

Nov 04, 2019

Energy-Aware Neural Architecture Optimization with Fast Splitting Steepest Descent

Oct 07, 2019

Improving Neural Language Modeling via Adversarial Training

Jun 10, 2019