Mohit Bansal

Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality

Nov 28, 2022
Yichen Jiang, Xiang Zhou, Mohit Bansal

Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention

Nov 21, 2022
Zineng Tang, Jaemin Cho, Jie Lei, Mohit Bansal

Evaluating the Factual Consistency of Large Language Models Through Summarization

Nov 15, 2022
Derek Tam, Anisha Mascarenhas, Shiyue Zhang, Sarah Kwan, Mohit Bansal, Colin Raffel

Are Hard Examples also Harder to Explain? A Study with Human and Model-Generated Explanations

Nov 14, 2022
Swarnadeep Saha, Peter Hase, Nazneen Rajani, Mohit Bansal

Evaluating and Improving Factuality in Multimodal Abstractive Summarization

Nov 04, 2022
David Wan, Mohit Bansal

Exclusive Supermask Subnetwork Training for Continual Learning

Oct 18, 2022
Prateek Yadav, Mohit Bansal

TVLT: Textless Vision-Language Transformer

Sep 28, 2022
Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal

Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees

Sep 21, 2022
Swarnadeep Saha, Shiyue Zhang, Peter Hase, Mohit Bansal

StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation

Sep 13, 2022
Adyasha Maharana, Darryl Hannan, Mohit Bansal

Extractive is not Faithful: An Investigation of Broad Unfaithfulness Problems in Extractive Summarization

Sep 08, 2022
Shiyue Zhang, David Wan, Mohit Bansal
