Igor Shalyminov

Can Your Model Tell a Negation from an Implicature? Unravelling Challenges With Intent Encoders

Mar 07, 2024
Yuwei Zhang, Siffi Singh, Sailik Sengupta, Igor Shalyminov, Hang Su, Hwanjun Song, Saab Mansour

Semi-Supervised Dialogue Abstractive Summarization via High-Quality Pseudolabel Selection

Mar 06, 2024
Jianfeng He, Hang Su, Jason Cai, Igor Shalyminov, Hwanjun Song, Saab Mansour

MAGID: An Automated Pipeline for Generating Synthetic Multi-modal Datasets

Mar 05, 2024
Hossein Aboutalebi, Hwanjun Song, Yusheng Xie, Arshit Gupta, Justin Sun, Hang Su, Igor Shalyminov, Nikolaos Pappas, Siffi Singh, Saab Mansour

TofuEval: Evaluating Hallucinations of LLMs on Topic-Focused Dialogue Summarization

Feb 20, 2024
Liyan Tang, Igor Shalyminov, Amy Wing-mei Wong, Jon Burnsky, Jake W. Vincent, Yu'an Yang, Siffi Singh, Song Feng, Hwanjun Song, Hang Su, Lijia Sun, Yi Zhang, Saab Mansour, Kathleen McKeown

Enhancing Abstractiveness of Summarization Models through Calibrated Distillation

Oct 20, 2023
Hwanjun Song, Igor Shalyminov, Hang Su, Siffi Singh, Kaisheng Yao, Saab Mansour

Data-Efficient Methods for Dialogue Systems

Dec 05, 2020
Igor Shalyminov

Hybrid Generative-Retrieval Transformers for Dialogue Domain Adaptation

Mar 06, 2020
Igor Shalyminov, Alessandro Sordoni, Adam Atkinson, Hannes Schulz

Data-Efficient Goal-Oriented Conversation with Dialogue Knowledge Transfer Networks

Oct 03, 2019
Igor Shalyminov, Sungjin Lee, Arash Eshghi, Oliver Lemon

Few-Shot Dialogue Generation Without Annotated Data: A Transfer Learning Approach

Aug 16, 2019
Igor Shalyminov, Sungjin Lee, Arash Eshghi, Oliver Lemon
