Marco Tagliasacchi

TokenSplit: Using Discrete Speech Representations for Direct, Refined, and Transcript-Conditioned Speech Separation and Recognition

Aug 21, 2023
Hakan Erdogan, Scott Wisdom, Xuankai Chang, Zalán Borsos, Marco Tagliasacchi, Neil Zeghidour, John R. Hershey

AudioPaLM: A Large Language Model That Can Speak and Listen

Jun 22, 2023
Paul K. Rubenstein, Chulayuth Asawaroengchai, Duc Dung Nguyen, Ankur Bapna, Zalán Borsos, Félix de Chaumont Quitry, Peter Chen, Dalia El Badawy, Wei Han, Eugene Kharitonov, Hannah Muckenhirn, Dirk Padfield, James Qin, Danny Rozenberg, Tara Sainath, Johan Schalkwyk, Matt Sharifi, Michelle Tadmor Ramanovich, Marco Tagliasacchi, Alexandru Tudor, Mihajlo Velimirović, Damien Vincent, Jiahui Yu, Yongqiang Wang, Vicky Zayats, Neil Zeghidour, Yu Zhang, Zhishuai Zhang, Lukas Zilka, Christian Frank

SoundStorm: Efficient Parallel Audio Generation

May 16, 2023
Zalán Borsos, Matt Sharifi, Damien Vincent, Eugene Kharitonov, Neil Zeghidour, Marco Tagliasacchi

LMCodec: A Low Bitrate Speech Codec With Causal Transformer Models

Mar 23, 2023
Teerapat Jenrungrot, Michael Chinen, W. Bastiaan Kleijn, Jan Skoglund, Zalán Borsos, Neil Zeghidour, Marco Tagliasacchi

Speak, Read and Prompt: High-Fidelity Text-to-Speech with Minimal Supervision

Feb 07, 2023
Eugene Kharitonov, Damien Vincent, Zalán Borsos, Raphaël Marinier, Sertan Girgin, Olivier Pietquin, Matt Sharifi, Marco Tagliasacchi, Neil Zeghidour

MusicLM: Generating Music From Text

Jan 26, 2023
Andrea Agostinelli, Timo I. Denk, Zalán Borsos, Jesse Engel, Mauro Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, Matt Sharifi, Neil Zeghidour, Christian Frank

AudioLM: a Language Modeling Approach to Audio Generation

Sep 07, 2022
Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Olivier Teboul, David Grangier, Marco Tagliasacchi, Neil Zeghidour

Text-Driven Separation of Arbitrary Sounds

Apr 12, 2022
Kevin Kilgour, Beat Gfeller, Qingqing Huang, Aren Jansen, Scott Wisdom, Marco Tagliasacchi

CycleGAN-Based Unpaired Speech Dereverberation

Mar 29, 2022
Hannah Muckenhirn, Aleksandr Safin, Hakan Erdogan, Felix de Chaumont Quitry, Marco Tagliasacchi, Scott Wisdom, John R. Hershey

Disentangling speech from surroundings in a neural audio codec

Mar 29, 2022
Ahmed Omran, Neil Zeghidour, Zalán Borsos, Félix de Chaumont Quitry, Malcolm Slaney, Marco Tagliasacchi
