Andreas Bulling

Neural Reasoning About Agents' Goals, Preferences, and Actions

Dec 12, 2023
Matteo Bortoletto, Lei Shi, Andreas Bulling

VD-GR: Boosting Visual Dialog with Cascaded Spatial-Temporal Multi-Modal Graphs

Oct 25, 2023
Adnen Abdessaied, Lei Shi, Andreas Bulling

MultiMediate'23: Engagement Estimation and Bodily Behaviour Recognition in Social Interactions

Aug 16, 2023
Philipp Müller, Michal Balazia, Tobias Baur, Michael Dietz, Alexander Heimerl, Dominik Schiller, Mohammed Guermal, Dominike Thomas, François Brémond, Jan Alexandersson, Elisabeth André, Andreas Bulling

Int-HRL: Towards Intention-based Hierarchical Reinforcement Learning

Jun 20, 2023
Anna Penzkofer, Simon Schaefer, Florian Strohm, Mihai Bâce, Stefan Leutenegger, Andreas Bulling

Neuro-Symbolic Visual Dialog

Aug 22, 2022
Adnen Abdessaied, Mihai Bâce, Andreas Bulling

Gaze-enhanced Crossmodal Embeddings for Emotion Recognition

Apr 30, 2022
Ahmed Abdou, Ekta Sood, Philipp Müller, Andreas Bulling

Scanpath Prediction on Information Visualisations

Dec 04, 2021
Yao Wang, Mihai Bâce, Andreas Bulling

Multimodal Integration of Human-Like Attention in Visual Question Answering

Sep 27, 2021
Ekta Sood, Fabian Kögel, Philipp Müller, Dominike Thomas, Mihai Bâce, Andreas Bulling

VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual Question Answering

Sep 27, 2021
Ekta Sood, Fabian Kögel, Florian Strohm, Prajit Dhar, Andreas Bulling

Neural Photofit: Gaze-based Mental Image Reconstruction

Aug 17, 2021
Florian Strohm, Ekta Sood, Sven Mayer, Philipp Müller, Mihai Bâce, Andreas Bulling
