Jade Copet

Brouhaha: multi-task training for voice activity detection, speech-to-noise ratio, and C50 room acoustics estimation

Oct 27, 2022
Marvin Lavechin, Marianne Métais, Hadrien Titeux, Alodie Boissonnet, Jade Copet, Morgane Rivière, Elika Bergelson, Alejandrina Cristia, Emmanuel Dupoux, Hervé Bredin

High Fidelity Neural Audio Compression

Oct 24, 2022
Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi

On The Robustness of Self-Supervised Representations for Spoken Language Modeling

Sep 30, 2022
Itai Gat, Felix Kreuk, Ann Lee, Jade Copet, Gabriel Synnaeve, Emmanuel Dupoux, Yossi Adi

AudioGen: Textually Guided Audio Generation

Sep 30, 2022
Felix Kreuk, Gabriel Synnaeve, Adam Polyak, Uriel Singer, Alexandre Défossez, Jade Copet, Devi Parikh, Yaniv Taigman, Yossi Adi

Generative Spoken Dialogue Language Modeling

Mar 30, 2022
Tu Anh Nguyen, Eugene Kharitonov, Jade Copet, Yossi Adi, Wei-Ning Hsu, Ali Elkahky, Paden Tomasello, Robin Algayres, Benoit Sagot, Abdelrahman Mohamed, Emmanuel Dupoux

textless-lib: a Library for Textless Spoken Language Processing

Feb 15, 2022
Eugene Kharitonov, Jade Copet, Kushal Lakhotia, Tu Anh Nguyen, Paden Tomasello, Ann Lee, Ali Elkahky, Wei-Ning Hsu, Abdelrahman Mohamed, Emmanuel Dupoux, Yossi Adi

Textless Speech Emotion Conversion using Decomposed and Discrete Representations

Nov 14, 2021
Felix Kreuk, Adam Polyak, Jade Copet, Eugene Kharitonov, Tu-Anh Nguyen, Morgane Rivière, Wei-Ning Hsu, Abdelrahman Mohamed, Emmanuel Dupoux, Yossi Adi

ASR4REAL: An extended benchmark for speech models

Oct 16, 2021
Morgane Riviere, Jade Copet, Gabriel Synnaeve

Text-Free Prosody-Aware Generative Spoken Language Modeling

Sep 07, 2021
Eugene Kharitonov, Ann Lee, Adam Polyak, Yossi Adi, Jade Copet, Kushal Lakhotia, Tu-Anh Nguyen, Morgane Rivière, Abdelrahman Mohamed, Emmanuel Dupoux, Wei-Ning Hsu

Speech Resynthesis from Discrete Disentangled Self-Supervised Representations

Apr 02, 2021
Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, Emmanuel Dupoux
