Young-Bum Kim

Deciding Whether to Ask Clarifying Questions in Large-Scale Spoken Language Understanding

Sep 25, 2021

AUGNLG: Few-shot Natural Language Generation using Self-trained Data Augmentation

Jun 10, 2021

Learning Slice-Aware Representations with Mixture of Attentions

Jun 04, 2021

Handling Long-Tail Queries with Slice-Aware Conversational Systems

Apr 26, 2021

Neural model robustness for skill routing in large-scale conversational AI systems: A design choice exploration

Mar 04, 2021

A Data-driven Approach to Estimate User Satisfaction in Multi-turn Dialogues

Mar 01, 2021

A Scalable Framework for Learning From Implicit User Feedback to Improve Natural Language Understanding in Large-Scale Conversational AI Systems

Oct 23, 2020

Self-Supervised Contrastive Learning for Efficient User Satisfaction Prediction in Conversational Agents

Oct 21, 2020

Large-scale Hybrid Approach for Predicting User Satisfaction with Conversational Agents

May 29, 2020

Pseudo Labeling and Negative Feedback Learning for Large-scale Multi-label Domain Classification

Mar 08, 2020