Siegfried Kunzmann

Quantization-Aware and Tensor-Compressed Training of Transformers for Natural Language Understanding

Jun 01, 2023
Zi Yang, Samridhi Choudhary, Siegfried Kunzmann, Zheng Zhang

Dual-Attention Neural Transducers for Efficient Wake Word Spotting in Speech Recognition

Apr 05, 2023
Saumya Y. Sahai, Jing Liu, Thejaswi Muniyappa, Kanthashree M. Sathyendra, Anastasios Alexandridis, Grant P. Strimel, Ross McGowan, Ariya Rastrow, Feng-Ju Chang, Athanasios Mouchtaris, Siegfried Kunzmann

Contextual Adapters for Personalized Speech Recognition in Neural Transducers

May 26, 2022
Kanthashree Mysore Sathyendra, Thejaswi Muniyappa, Feng-Ju Chang, Jing Liu, Jinru Su, Grant P. Strimel, Athanasios Mouchtaris, Siegfried Kunzmann

Context-Aware Transformer Transducer for Speech Recognition

Nov 05, 2021
Feng-Ju Chang, Jing Liu, Martin Radfar, Athanasios Mouchtaris, Maurizio Omologo, Ariya Rastrow, Siegfried Kunzmann

FANS: Fusing ASR and NLU for on-device SLU

Oct 31, 2021
Martin Radfar, Athanasios Mouchtaris, Siegfried Kunzmann, Ariya Rastrow

Exploiting Large-scale Teacher-Student Training for On-device Acoustic Models

Jun 11, 2021
Jing Liu, Rupak Vignesh Swaminathan, Sree Hari Krishnan Parthasarathi, Chunchuan Lyu, Athanasios Mouchtaris, Siegfried Kunzmann

End-to-End Multi-Channel Transformer for Speech Recognition

Feb 08, 2021
Feng-Ju Chang, Martin Radfar, Athanasios Mouchtaris, Brian King, Siegfried Kunzmann

Tie Your Embeddings Down: Cross-Modal Latent Spaces for End-to-end Spoken Language Understanding

Nov 18, 2020
Bhuvan Agrawal, Markus Müller, Martin Radfar, Samridhi Choudhary, Athanasios Mouchtaris, Siegfried Kunzmann

End-to-End Neural Transformer Based Spoken Language Understanding

Aug 12, 2020
Martin Radfar, Athanasios Mouchtaris, Siegfried Kunzmann
