Tal Remez

The Larger the Better? Improved LLM Code-Generation via Budget Reallocation

Mar 31, 2024
Michael Hassid, Tal Remez, Jonas Gehring, Roy Schwartz, Yossi Adi

Masked Audio Generation using a Single Non-Autoregressive Transformer

Jan 09, 2024
Alon Ziv, Itai Gat, Gael Le Lan, Tal Remez, Felix Kreuk, Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi

Code Llama: Open Foundation Models for Code

Aug 25, 2023
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve


EXPRESSO: A Benchmark and Analysis of Discrete Expressive Speech Resynthesis

Aug 10, 2023
Tu Anh Nguyen, Wei-Ning Hsu, Antony D'Avirro, Bowen Shi, Itai Gat, Maryam Fazel-Zarani, Tal Remez, Jade Copet, Gabriel Synnaeve, Michael Hassid, Felix Kreuk, Yossi Adi, Emmanuel Dupoux


Simple and Controllable Music Generation

Jun 08, 2023
Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez


Textually Pretrained Speech Language Models

May 22, 2023
Michael Hassid, Tal Remez, Tu Anh Nguyen, Itai Gat, Alexis Conneau, Felix Kreuk, Jade Copet, Alexandre Défossez, Gabriel Synnaeve, Emmanuel Dupoux, Roy Schwartz, Yossi Adi


ReVISE: Self-Supervised Speech Resynthesis with Visual Input for Universal and Generalized Speech Enhancement

Dec 21, 2022
Wei-Ning Hsu, Tal Remez, Bowen Shi, Jacob Donley, Yossi Adi


AudioScopeV2: Audio-Visual Attention Architectures for Calibrated Open-Domain On-Screen Sound Separation

Jul 20, 2022
Efthymios Tzinis, Scott Wisdom, Tal Remez, John R. Hershey


More than Words: In-the-Wild Visually-Driven Prosody for Text-to-Speech

Nov 19, 2021
Michael Hassid, Michelle Tadmor Ramanovich, Brendan Shillingford, Miaosen Wang, Ye Jia, Tal Remez
