
Amanpreet Singh

Physics Informed Convex Artificial Neural Networks (PICANNs) for Optimal Transport based Density Estimation

Apr 02, 2021

Transformer is All You Need: Multimodal Multitask Learning with a Unified Transformer

Feb 22, 2021

Open4Business(O4B): An Open Access Dataset for Summarizing Business Documents

Nov 29, 2020

Seeing the Un-Scene: Learning Amodal Semantic Maps for Room Navigation

Jul 20, 2020

The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes

Jun 08, 2020

Are we pretraining it right? Digging deeper into visio-linguistic pretraining

Apr 19, 2020

TextCaps: a Dataset for Image Captioning with Reading Comprehension

Mar 24, 2020

Iterative Answer Prediction with Pointer-Augmented Multimodal Transformers for TextVQA

Dec 05, 2019

Towards VQA Models That Can Read

May 13, 2019

SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems

May 02, 2019