"Text": models, code, and papers

Referring to Screen Texts with Voice Assistants

Jun 10, 2023
Shruti Bhargava, Anand Dhoot, Ing-Marie Jonsson, Hoang Long Nguyen, Alkesh Patel, Hong Yu, Vincent Renkens

Varianceflow: High-Quality and Controllable Text-to-Speech using Variance Information via Normalizing Flow

Feb 27, 2023
Yoonhyung Lee, Jinhyeok Yang, Kyomin Jung

Identifying Mentions of Pain in Mental Health Records Text: A Natural Language Processing Approach

Apr 05, 2023
Jaya Chaturvedi, Sumithra Velupillai, Robert Stewart, Angus Roberts

When Do Annotator Demographics Matter? Measuring the Influence of Annotator Demographics with the POPQUORN Dataset

Jun 12, 2023
Jiaxin Pei, David Jurgens

Less Likely Brainstorming: Using Language Models to Generate Alternative Hypotheses

May 30, 2023
Liyan Tang, Yifan Peng, Yanshan Wang, Ying Ding, Greg Durrett, Justin F. Rousseau

Collaboration with Conversational AI Assistants for UX Evaluation: Questions and How to Ask them (Voice vs. Text)

Mar 07, 2023
Emily Kuang, Ehsan Jahangirzadeh Soure, Mingming Fan, Jian Zhao, Kristen Shinohara

AudioLDM: Text-to-Audio Generation with Latent Diffusion Models

Jan 29, 2023
Haohe Liu, Zehua Chen, Yi Yuan, Xinhao Mei, Xubo Liu, Danilo Mandic, Wenwu Wang, Mark D. Plumbley

"I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation

May 18, 2023
Anaelia Ovalle, Palash Goyal, Jwala Dhamala, Zachary Jaggers, Kai-Wei Chang, Aram Galstyan, Richard Zemel, Rahul Gupta

Fine-Tuning Language Models for Scientific Writing Support

Jun 21, 2023
Justin Mücke, Daria Waldow, Luise Metzger, Philipp Schauz, Marcel Hoffman, Nicolas Lell, Ansgar Scherp

FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention

May 21, 2023
Guangxuan Xiao, Tianwei Yin, William T. Freeman, Frédo Durand, Song Han
