Andreas Bulling

Multimodal Integration of Human-Like Attention in Visual Question Answering

Sep 27, 2021
Ekta Sood, Fabian Kögel, Philipp Müller, Dominike Thomas, Mihai Bâce, Andreas Bulling

VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual Question Answering

Sep 27, 2021
Ekta Sood, Fabian Kögel, Florian Strohm, Prajit Dhar, Andreas Bulling

Neural Photofit: Gaze-based Mental Image Reconstruction

Aug 17, 2021
Florian Strohm, Ekta Sood, Sven Mayer, Philipp Müller, Mihai Bâce, Andreas Bulling

Improving Natural Language Processing Tasks with Human Gaze-Guided Neural Attention

Oct 27, 2020
Ekta Sood, Simon Tannert, Philipp Müller, Andreas Bulling

Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension

Oct 27, 2020
Ekta Sood, Simon Tannert, Diego Frassinelli, Andreas Bulling, Ngoc Thang Vu

Accurate and Robust Eye Contact Detection During Everyday Mobile Device Interactions

Jul 25, 2019
Mihai Bâce, Sander Staal, Andreas Bulling

How far are we from quantifying visual attention in mobile HCI?

Jul 25, 2019
Mihai Bâce, Sander Staal, Andreas Bulling

Learning to Find Eye Region Landmarks for Remote Gaze Estimation in Unconstrained Settings

May 12, 2018
Seonwook Park, Xucong Zhang, Andreas Bulling, Otmar Hilliges
