Yashar Mehdad

Adapting Pretrained Text-to-Text Models for Long Text Sequences

Sep 21, 2022
Wenhan Xiong, Anchit Gupta, Shubham Toshniwal, Yashar Mehdad, Wen-tau Yih

[Figures 1–4]

BiT: Robustly Binarized Multi-distilled Transformer

May 25, 2022
Zechun Liu, Barlas Oguz, Aasish Pappu, Lin Xiao, Scott Yih, Meng Li, Raghuraman Krishnamoorthi, Yashar Mehdad

[Figures 1–4]

CONFIT: Toward Faithful Dialogue Summarization with Linguistically-Informed Contrastive Fine-tuning

Dec 16, 2021
Xiangru Tang, Arjun Nair, Borui Wang, Bingyao Wang, Jai Desai, Aaron Wade, Haoran Li, Asli Celikyilmaz, Yashar Mehdad, Dragomir Radev

[Figures 1–4]

Simple Local Attentions Remain Competitive for Long-Context Tasks

Dec 14, 2021
Wenhan Xiong, Barlas Oğuz, Anchit Gupta, Xilun Chen, Diana Liskovich, Omer Levy, Wen-tau Yih, Yashar Mehdad

[Figures 1–4]

Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One?

Oct 13, 2021
Xilun Chen, Kushal Lakhotia, Barlas Oğuz, Anchit Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta, Wen-tau Yih

[Figures 1–4]

Investigating Crowdsourcing Protocols for Evaluating the Factual Consistency of Summaries

Sep 21, 2021
Xiangru Tang, Alexander R. Fabbri, Ziming Mao, Griffin Adams, Borui Wang, Haoran Li, Yashar Mehdad, Dragomir Radev

[Figures 1–4]

Domain-matched Pre-training Tasks for Dense Retrieval

Jul 28, 2021
Barlas Oğuz, Kushal Lakhotia, Anchit Gupta, Patrick Lewis, Vladimir Karpukhin, Aleksandra Piktus, Xilun Chen, Sebastian Riedel, Wen-tau Yih, Sonal Gupta, Yashar Mehdad

[Figures 1–4]

Syntax-augmented Multilingual BERT for Cross-lingual Transfer

Jun 03, 2021
Wasi Uddin Ahmad, Haoran Li, Kai-Wei Chang, Yashar Mehdad

[Figures 1–4]

ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining

Jun 01, 2021
Alexander R. Fabbri, Faiaz Rahman, Imad Rizvi, Borui Wang, Haoran Li, Yashar Mehdad, Dragomir Radev

[Figures 1–4]

EASE: Extractive-Abstractive Summarization with Explanations

May 14, 2021
Haoran Li, Arash Einolghozati, Srinivasan Iyer, Bhargavi Paranjape, Yashar Mehdad, Sonal Gupta, Marjan Ghazvininejad

[Figures 1–4]