"Text": models, code, and papers

On Measuring Social Biases in Prompt-Based Multi-Task Learning

May 23, 2022
Afra Feyza Akyürek, Sejin Paik, Muhammed Yusuf Kocyigit, Seda Akbiyik, Şerife Leman Runyun, Derry Wijaya

VICTR: Visual Information Captured Text Representation for Text-to-Image Multimodal Tasks

Oct 14, 2020
Soyeon Caren Han, Siqu Long, Siwen Luo, Kunze Wang, Josiah Poon

MOST: A Multi-Oriented Scene Text Detector with Localization Refinement

Apr 05, 2021
Minghang He, Minghui Liao, Zhibo Yang, Humen Zhong, Jun Tang, Wenqing Cheng, Cong Yao, Yongpan Wang, Xiang Bai

Preprocessing Source Code Comments for Linguistic Models

Aug 23, 2022
Sergey Matskevich, Colin Gordon

Cross-modal Contrastive Learning for Speech Translation

May 05, 2022
Rong Ye, Mingxuan Wang, Lei Li

Human-AI Collaboration Enables More Empathic Conversations in Text-based Peer-to-Peer Mental Health Support

Mar 28, 2022
Ashish Sharma, Inna W. Lin, Adam S. Miner, David C. Atkins, Tim Althoff

Using Large Language Models to Simulate Multiple Humans

Aug 18, 2022
Gati Aher, Rosa I. Arriaga, Adam Tauman Kalai

Time-Aware Ancient Chinese Text Translation and Inference

Jul 07, 2021
Ernie Chang, Yow-Ting Shiue, Hui-Syuan Yeh, Vera Demberg

Enhanced Seq2Seq Autoencoder via Contrastive Learning for Abstractive Text Summarization

Aug 26, 2021
Chujie Zheng, Kunpeng Zhang, Harry Jiannan Wang, Ling Fan, Zhe Wang

Why is constrained neural language generation particularly challenging?

Jun 11, 2022
Cristina Garbacea, Qiaozhu Mei