Devamanyu Hazarika

Empowering parameter-efficient transfer learning by recognizing the kernel structure in self-attention

May 07, 2022
Yifan Chen, Devamanyu Hazarika, Mahdi Namazifar, Yang Liu, Di Jin, Dilek Hakkani-Tür

Exemplars-guided Empathetic Response Generation Controlled by the Elements of Human Communication

Jun 22, 2021
Navonil Majumder, Deepanway Ghosal, Devamanyu Hazarika, Alexander Gelbukh, Rada Mihalcea, Soujanya Poria

Zero-Shot Controlled Generation with Encoder-Decoder Transformers

Jun 15, 2021
Devamanyu Hazarika, Mahdi Namazifar, Dilek Hakkani-Tür

Recognizing Emotion Cause in Conversations

Dec 24, 2020
Soujanya Poria, Navonil Majumder, Devamanyu Hazarika, Deepanway Ghosal, Rishabh Bhardwaj, Samson Yu Bai Jian, Romila Ghosh, Niyati Chhaya, Alexander Gelbukh, Rada Mihalcea

Domain Divergences: a Survey and Empirical Analysis

Oct 23, 2020
Abhinav Ramesh Kashyap, Devamanyu Hazarika, Min-Yen Kan, Roger Zimmermann

Emerging Trends of Multimodal Research in Vision and Language

Oct 19, 2020
Shagun Uppal, Sarthak Bhagat, Devamanyu Hazarika, Navonil Majumder, Soujanya Poria, Roger Zimmermann, Amir Zadeh

KinGDOM: Knowledge-Guided DOMain adaptation for sentiment analysis

May 11, 2020
Deepanway Ghosal, Devamanyu Hazarika, Abhinaba Roy, Navonil Majumder, Rada Mihalcea, Soujanya Poria

MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis

May 08, 2020
Devamanyu Hazarika, Roger Zimmermann, Soujanya Poria
