
Ozlem Kalinli

TODM: Train Once Deploy Many Efficient Supernet-Based RNN-T Compression For On-device ASR Models

Sep 05, 2023
Yuan Shangguan, Haichuan Yang, Danni Li, Chunyang Wu, Yassir Fathullah, Dilin Wang, Ayushi Dalmia, Raghuraman Krishnamoorthi, Ozlem Kalinli, Junteng Jia, Jay Mahadeokar, Xin Lei, Mike Seltzer, Vikas Chandra

Contextual Biasing of Named-Entities with Large Language Models

Sep 01, 2023
Chuanneng Sun, Zeeshan Ahmed, Yingyi Ma, Zhe Liu, Yutong Pang, Ozlem Kalinli

Modality Confidence Aware Training for Robust End-to-End Spoken Language Understanding

Jul 22, 2023
Suyoun Kim, Akshat Shrivastava, Duc Le, Ju Lin, Ozlem Kalinli, Michael L. Seltzer

Prompting Large Language Models with Speech Recognition Abilities

Jul 21, 2023
Yassir Fathullah, Chunyang Wu, Egor Lakomkin, Junteng Jia, Yuan Shangguan, Ke Li, Jinxi Guo, Wenhan Xiong, Jay Mahadeokar, Ozlem Kalinli, Christian Fuegen, Mike Seltzer

Towards Selection of Text-to-speech Data to Augment ASR Training

May 30, 2023
Shuo Liu, Leda Sarı, Chunyang Wu, Gil Keren, Yuan Shangguan, Jay Mahadeokar, Ozlem Kalinli

Multi-Head State Space Model for Speech Recognition

May 25, 2023
Yassir Fathullah, Chunyang Wu, Yuan Shangguan, Junteng Jia, Wenhan Xiong, Jay Mahadeokar, Chunxi Liu, Yangyang Shi, Ozlem Kalinli, Mike Seltzer, Mark J. F. Gales

Improving Fast-slow Encoder based Transducer with Streaming Deliberation

Dec 15, 2022
Ke Li, Jay Mahadeokar, Jinxi Guo, Yangyang Shi, Gil Keren, Ozlem Kalinli, Michael L. Seltzer, Duc Le

Massively Multilingual ASR on 70 Languages: Tokenization, Architecture, and Generalization Capabilities

Nov 10, 2022
Andros Tjandra, Nayan Singhal, David Zhang, Ozlem Kalinli, Abdelrahman Mohamed, Duc Le, Michael L. Seltzer

Factorized Blank Thresholding for Improved Runtime Efficiency of Neural Transducers

Nov 02, 2022
Duc Le, Frank Seide, Yuhao Wang, Yang Li, Kjell Schubert, Ozlem Kalinli, Michael L. Seltzer
