Apoorv Vyas

Audiobox: Unified Audio Generation with Natural Language Prompts

Dec 25, 2023
Apoorv Vyas, Bowen Shi, Matthew Le, Andros Tjandra, Yi-Chiao Wu, Baishan Guo, Jiemin Zhang, Xinyue Zhang, Robert Adkins, William Ngan, Jeff Wang, Ivan Cruz, Bapi Akula, Akinniyi Akinyemi, Brian Ellis, Rashel Moritz, Yael Yungster, Alice Rakotoarison, Liang Tan, Chris Summers, Carleigh Wood, Joshua Lane, Mary Williamson, Wei-Ning Hsu

Generative Pre-training for Speech with Flow Matching

Oct 25, 2023
Alexander H. Liu, Matt Le, Apoorv Vyas, Bowen Shi, Andros Tjandra, Wei-Ning Hsu

Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale

Jun 23, 2023
Matthew Le, Apoorv Vyas, Bowen Shi, Brian Karrer, Leda Sari, Rashel Moritz, Mary Williamson, Vimal Manohar, Yossi Adi, Jay Mahadeokar, Wei-Ning Hsu

Scaling Speech Technology to 1,000+ Languages

May 22, 2023
Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli

On-demand compute reduction with stochastic wav2vec 2.0

Apr 25, 2022
Apoorv Vyas, Wei-Ning Hsu, Michael Auli, Alexei Baevski

Comparing CTC and LFMMI for out-of-domain adaptation of wav2vec 2.0 acoustic model

Apr 06, 2021
Apoorv Vyas, Srikanth Madikeri, Hervé Bourlard

Lattice-Free MMI Adaptation Of Self-Supervised Pretrained Acoustic Models

Dec 28, 2020
Apoorv Vyas, Srikanth Madikeri, Hervé Bourlard

Fast Transformers with Clustered Attention

Jul 09, 2020
Apoorv Vyas, Angelos Katharopoulos, François Fleuret

Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention

Jun 30, 2020
Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret
