"Text": models, code, and papers

Evaluation of Faithfulness Using the Longest Supported Subsequence

Aug 23, 2023
Anirudh Mittal, Timo Schick, Mikel Artetxe, Jane Dwivedi-Yu

Take the Hint: Improving Arabic Diacritization with Partially-Diacritized Text

Jun 06, 2023
Parnia Bahar, Mattia Di Gangi, Nick Rossenbach, Mohammad Zeineldeen

Understanding Shared Speech-Text Representations

Apr 27, 2023
Gary Wang, Kyle Kastner, Ankur Bapna, Zhehuai Chen, Andrew Rosenberg, Bhuvana Ramabhadran, Yu Zhang

DLIP: Distilling Language-Image Pre-training

Aug 24, 2023
Huafeng Kuang, Jie Wu, Xiawu Zheng, Ming Li, Xuefeng Xiao, Rui Wang, Min Zheng, Rongrong Ji

DIG In: Evaluating Disparities in Image Generations with Indicators for Geographic Diversity

Aug 11, 2023
Melissa Hall, Candace Ross, Adina Williams, Nicolas Carion, Michal Drozdzal, Adriana Romero Soriano

Lip2Vec: Efficient and Robust Visual Speech Recognition via Latent-to-Latent Visual to Audio Representation Mapping

Aug 11, 2023
Yasser Abdelaziz Dahou Djilali, Sanath Narayan, Haithem Boussaid, Ebtessam Almazrouei, Merouane Debbah

Probabilistic Adaptation of Text-to-Video Models

Jun 02, 2023
Mengjiao Yang, Yilun Du, Bo Dai, Dale Schuurmans, Joshua B. Tenenbaum, Pieter Abbeel

GRASP: A Rehearsal Policy for Efficient Online Continual Learning

Aug 25, 2023
Md Yousuf Harun, Jhair Gallardo, Christopher Kanan

MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning

Aug 25, 2023
Bang Yang, Fenglin Liu, Xian Wu, Yaowei Wang, Xu Sun, Yuexian Zou

Revisiting Sentence Union Generation as a Testbed for Text Consolidation

May 24, 2023
Eran Hirsch, Valentina Pyatkin, Ruben Wolhandler, Avi Caciularu, Asi Shefer, Ido Dagan
