
Gokhan Tur

Bilkent University, Ankara, Turkey

Confidence Estimation for LLM-Based Dialogue State Tracking

Sep 15, 2024

Dialog Flow Induction for Constrainable LLM-Based Chatbots

Aug 03, 2024

AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model

Aug 03, 2022

Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems

Jun 15, 2022

MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages

Apr 18, 2022

TEACh: Task-driven Embodied Agents that Chat

Oct 15, 2021

Generative Conversational Networks

Jun 15, 2021

Learning Better Visual Dialog Agents with Pretrained Visual-Linguistic Representation

May 24, 2021

Correcting Automated and Manual Speech Transcription Errors using Warped Language Models

Mar 26, 2021

Are We There Yet? Learning to Localize in Embodied Instruction Following

Jan 09, 2021