Markus Kliegl


Trace norm regularization and faster inference for embedded speech recognition RNNs

Feb 06, 2018
Markus Kliegl, Siddharth Goyal, Kexin Zhao, Kavya Srinet, Mohammad Shoeybi

Figures 1–4 for Trace norm regularization and faster inference for embedded speech recognition RNNs

We propose and evaluate new techniques for compressing and speeding up dense matrix multiplications as found in the fully connected and recurrent layers of neural networks for embedded large vocabulary continuous speech recognition (LVCSR). For compression, we introduce and study a trace norm regularization technique for training low-rank factored versions of matrix multiplications. Compared to standard low-rank training, we show that our method leads to favorable trade-offs between accuracy and number of parameters, and can be used to speed up training of large models. For speedup, we enable faster inference on ARM processors through new open-sourced kernels optimized for small batch sizes, resulting in 3x to 7x speedups over the widely used gemmlowp library. Beyond LVCSR, we expect our techniques and kernels to be more generally applicable to embedded neural networks with large fully connected or recurrent layers.
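A minimal NumPy sketch (not the paper's implementation) of the standard variational identity underlying trace norm regularization of a factored layer W = U V: the trace (nuclear) norm of W equals the minimum over factorizations of half the sum of squared Frobenius norms of the factors, so penalizing the factors' Frobenius norms during training implicitly penalizes the trace norm of the product and encourages low rank. All sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))

# Trace norm directly: the sum of singular values of W.
trace_norm = np.linalg.svd(W, compute_uv=False).sum()

# The minimizing factorization comes from the SVD:
# W = A diag(S) B, so take U = A sqrt(S) and V = sqrt(S) B.
A, S, B = np.linalg.svd(W, full_matrices=False)
U = A * np.sqrt(S)            # scale columns of A
V = np.sqrt(S)[:, None] * B   # scale rows of B

# Frobenius-norm surrogate: 0.5 * (||U||_F^2 + ||V||_F^2).
surrogate = 0.5 * (np.linalg.norm(U, "fro") ** 2 + np.linalg.norm(V, "fro") ** 2)

assert np.allclose(W, U @ V)
assert np.isclose(trace_norm, surrogate)
```

For an arbitrary (non-minimizing) factorization the surrogate only upper-bounds the trace norm, which is why it works as a differentiable regularizer on the factors.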

* Our optimized inference kernels are available at: https://github.com/PaddlePaddle/farm (Note: This paper was submitted to, but rejected from, ICLR 2018. We believe it may still be of value to others. Please see the discussion here: https://openreview.net/forum?id=B1tC-LT6W) 

Convolutional Recurrent Neural Networks for Small-Footprint Keyword Spotting

Jul 04, 2017
Sercan O. Arik, Markus Kliegl, Rewon Child, Joel Hestness, Andrew Gibiansky, Chris Fougner, Ryan Prenger, Adam Coates

Figures 1–4 for Convolutional Recurrent Neural Networks for Small-Footprint Keyword Spotting

Keyword spotting (KWS) constitutes a major component of human-technology interfaces. The goals for KWS are to maximize detection accuracy at a low false alarm (FA) rate while minimizing footprint size, latency, and complexity. Towards achieving them, we study Convolutional Recurrent Neural Networks (CRNNs). Inspired by large-scale state-of-the-art speech recognition systems, we combine the strengths of convolutional layers and recurrent layers to exploit local structure and long-range context. We analyze the effect of architecture parameters, and propose training strategies to improve performance. With only ~230k parameters, our CRNN model yields acceptably low latency, and achieves 97.71% accuracy at 0.5 FA/hour for 5 dB signal-to-noise ratio.
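A hedged NumPy sketch (not the paper's architecture; all sizes are assumptions) of the CRNN idea the abstract describes: a convolution over time extracts local spectral features, a simple recurrent layer accumulates long-range context, and a final sigmoid produces a keyword score.

```python
import numpy as np

rng = np.random.default_rng(0)
T, F = 50, 40              # frames x filterbank features (illustrative sizes)
x = rng.standard_normal((T, F))

# Convolution over time with a small kernel: local feature extraction.
k, C = 5, 8                # kernel width and output channels (illustrative)
W_conv = rng.standard_normal((k, F, C)) * 0.1
conv_out = np.stack(
    [np.einsum("kf,kfc->c", x[t : t + k], W_conv) for t in range(T - k + 1)]
)
conv_out = np.maximum(conv_out, 0.0)  # ReLU

# Simple (Elman-style) recurrent layer over conv features: long-range context.
H = 16                     # hidden size (illustrative)
W_in = rng.standard_normal((C, H)) * 0.1
W_rec = rng.standard_normal((H, H)) * 0.1
h = np.zeros(H)
for t in range(conv_out.shape[0]):
    h = np.tanh(conv_out[t] @ W_in + h @ W_rec)

# Keyword detection score from the final hidden state.
w_out = rng.standard_normal(H) * 0.1
score = 1.0 / (1.0 + np.exp(-(h @ w_out)))
```

The paper uses gated recurrent units rather than this plain recurrence, but the division of labor is the same: convolution for local structure, recurrence for context across the utterance.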

* Accepted to Interspeech 2017 

Reducing Bias in Production Speech Models

May 11, 2017
Eric Battenberg, Rewon Child, Adam Coates, Christopher Fougner, Yashesh Gaur, Jiaji Huang, Heewoo Jun, Ajay Kannan, Markus Kliegl, Atul Kumar, Hairong Liu, Vinay Rao, Sanjeev Satheesh, David Seetapun, Anuroop Sriram, Zhenyao Zhu

Figures 1–4 for Reducing Bias in Production Speech Models

Replacing hand-engineered pipelines with end-to-end deep learning systems has enabled strong results in applications like speech and object recognition. However, the causality and latency constraints of production systems put end-to-end speech models back into the underfitting regime and expose biases in the model that we show cannot be overcome by "scaling up", i.e., training bigger models on more data. In this work we systematically identify and address sources of bias, reducing error rates by up to 20% while remaining practical for deployment. We achieve this by utilizing improved neural architectures for streaming inference, solving optimization issues, and employing strategies that make audio and label modeling more versatile.
