Michelle Tadmor Ramanovich

AudioPaLM: A Large Language Model That Can Speak and Listen

Jun 22, 2023
Paul K. Rubenstein, Chulayuth Asawaroengchai, Duc Dung Nguyen, Ankur Bapna, Zalán Borsos, Félix de Chaumont Quitry, Peter Chen, Dalia El Badawy, Wei Han, Eugene Kharitonov, Hannah Muckenhirn, Dirk Padfield, James Qin, Danny Rozenberg, Tara Sainath, Johan Schalkwyk, Matt Sharifi, Michelle Tadmor Ramanovich, Marco Tagliasacchi, Alexandru Tudor, Mihajlo Velimirović, Damien Vincent, Jiahui Yu, Yongqiang Wang, Vicky Zayats, Neil Zeghidour, Yu Zhang, Zhishuai Zhang, Lukas Zilka, Christian Frank

Translatotron 3: Speech to Speech Translation with Monolingual Data

Jun 01, 2023
Eliya Nachmani, Alon Levkovitch, Yifan Ding, Chulayuth Asawaroengchai, Heiga Zen, Michelle Tadmor Ramanovich

LMs with a Voice: Spoken Language Modeling beyond Speech Tokens

May 24, 2023
Eliya Nachmani, Alon Levkovitch, Julian Salazar, Chulayuth Asawaroengchai, Soroosh Mariooryad, RJ Skerry-Ryan, Michelle Tadmor Ramanovich

CVSS Corpus and Massively Multilingual Speech-to-Speech Translation

Jan 16, 2022
Ye Jia, Michelle Tadmor Ramanovich, Quan Wang, Heiga Zen

More than Words: In-the-Wild Visually-Driven Prosody for Text-to-Speech

Nov 19, 2021
Michael Hassid, Michelle Tadmor Ramanovich, Brendan Shillingford, Miaosen Wang, Ye Jia, Tal Remez

Translatotron 2: Robust direct speech-to-speech translation

Jul 29, 2021
Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, Roi Pomerantz
