Mahdi Namazifar

CESAR: Automatic Induction of Compositional Instructions for Multi-turn Dialogs

Nov 29, 2023
Taha Aksu, Devamanyu Hazarika, Shikib Mehri, Seokhwan Kim, Dilek Hakkani-Tür, Yang Liu, Mahdi Namazifar

Data-Efficient Alignment of Large Language Models with Human Feedback Through Natural Language

Nov 24, 2023
Di Jin, Shikib Mehri, Devamanyu Hazarika, Aishwarya Padmakumar, Sungjin Lee, Yang Liu, Mahdi Namazifar

"What do others think?": Task-Oriented Conversational Modeling with Subjective Knowledge

May 20, 2023
Chao Zhao, Spandana Gella, Seokhwan Kim, Di Jin, Devamanyu Hazarika, Alexandros Papangelis, Behnam Hedayatnia, Mahdi Namazifar, Yang Liu, Dilek Hakkani-Tur

KILM: Knowledge Injection into Encoder-Decoder Language Models

Feb 17, 2023
Yan Xu, Mahdi Namazifar, Devamanyu Hazarika, Aishwarya Padmakumar, Yang Liu, Dilek Hakkani-Tür

Role of Bias Terms in Dot-Product Attention

Feb 16, 2023
Mahdi Namazifar, Devamanyu Hazarika, Dilek Hakkani-Tur

Selective In-Context Data Augmentation for Intent Detection using Pointwise V-Information

Feb 10, 2023
Yen-Ting Lin, Alexandros Papangelis, Seokhwan Kim, Sungjin Lee, Devamanyu Hazarika, Mahdi Namazifar, Di Jin, Yang Liu, Dilek Hakkani-Tur

Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning

Oct 26, 2022
Yifan Chen, Devamanyu Hazarika, Mahdi Namazifar, Yang Liu, Di Jin, Dilek Hakkani-Tur

Empowering parameter-efficient transfer learning by recognizing the kernel structure in self-attention

May 07, 2022
Yifan Chen, Devamanyu Hazarika, Mahdi Namazifar, Yang Liu, Di Jin, Dilek Hakkani-Tur

Zero-Shot Controlled Generation with Encoder-Decoder Transformers

Jun 15, 2021
Devamanyu Hazarika, Mahdi Namazifar, Dilek Hakkani-Tür
