Rama Doddipatla

Dialogue Strategy Adaptation to New Action Sets Using Multi-dimensional Modelling

Apr 14, 2022
Simon Keizer, Norbert Braunschweiler, Svetlana Stoyanchev, Rama Doddipatla

Transformer-based Streaming ASR with Cumulative Attention

Mar 11, 2022
Mohan Li, Shucong Zhang, Catalin Zorila, Rama Doddipatla

A study on cross-corpus speech emotion recognition and data augmentation

Jan 10, 2022
Norbert Braunschweiler, Rama Doddipatla, Simon Keizer, Svetlana Stoyanchev

Monaural source separation: From anechoic to reverberant environments

Nov 15, 2021
Tobias Cord-Landwehr, Christoph Boeddeker, Thilo von Neumann, Catalin Zorila, Rama Doddipatla, Reinhold Haeb-Umbach

Towards Handling Unconstrained User Preferences in Dialogue

Sep 17, 2021
Suraj Pandey, Svetlana Stoyanchev, Rama Doddipatla

Teacher-Student MixIT for Unsupervised and Semi-supervised Speech Separation

Jun 16, 2021
Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker

Head-synchronous Decoding for Transformer-based Streaming ASR

Apr 26, 2021
Mohan Li, Catalin Zorila, Rama Doddipatla

Multiple-hypothesis CTC-based semi-supervised adaptation of end-to-end speech recognition

Mar 31, 2021
Cong-Thanh Do, Rama Doddipatla, Thomas Hain

Train your classifier first: Cascade Neural Networks Training from upper layers to lower layers

Feb 09, 2021
Shucong Zhang, Cong-Thanh Do, Rama Doddipatla, Erfan Loweimi, Peter Bell, Steve Renals

Time-Domain Speech Extraction with Spatial Information and Multi Speaker Conditioning Mechanism

Feb 07, 2021
Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker
