Chia-Yu Chen

Attention-based Learning for Sleep Apnea and Limb Movement Detection using Wi-Fi CSI Signals
Mar 26, 2023

Accelerating Inference and Language Model Fusion of Recurrent Neural Network Transducers via End-to-End 4-bit Quantization
Jun 16, 2022

4-bit Quantization of LSTM-based Speech Recognition Models
Aug 27, 2021

ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training
Apr 21, 2021

Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks
Jan 19, 2019

Training Deep Neural Networks with 8-bit Floating Point Numbers
Dec 19, 2018

AdaComp : Adaptive Residual Gradient Compression for Data-Parallel Distributed Training
Dec 07, 2017