Alan Black

Two-Pass Low Latency End-to-End Spoken Language Understanding

Jul 14, 2022
Siddhant Arora, Siddharth Dalmia, Xuankai Chang, Brian Yan, Alan Black, Shinji Watanabe

DialoGraph: Incorporating Interpretable Strategy-Graph Networks into Negotiation Dialogues

Jun 02, 2021
Rishabh Joshi, Vidhisha Balachandran, Shikhar Vashishth, Alan Black, Yulia Tsvetkov

Task-Specific Pre-Training and Cross Lingual Transfer for Code-Switched Data

Feb 24, 2021
Akshat Gupta, Sai Krishna Rallabandi, Alan Black

Reading Between the Lines: Exploring Infilling in Visual Narratives

Oct 26, 2020
Khyathi Raghavi Chandu, Ruo-Ping Dong, Alan Black

Disentangling Speech and Non-Speech Components for Building Robust Acoustic Models from Found Data

Sep 25, 2019
Nishant Gurunath, Sai Krishna Rallabandi, Alan Black

Linguistic unit discovery from multi-modal inputs in unwritten languages: Summary of the "Speaking Rosetta" JSALT 2017 Workshop

Feb 14, 2018
Odette Scharenborg, Laurent Besacier, Alan Black, Mark Hasegawa-Johnson, Florian Metze, Graham Neubig, Sebastian Stueker, Pierre Godard, Markus Mueller, Lucas Ondel, Shruti Palaskar, Philip Arthur, Francesco Ciannella, Mingxing Du, Elin Larsen, Danny Merkx, Rachid Riad, Liming Wang, Emmanuel Dupoux
