Basura Fernando

CausalChaos! Dataset for Comprehensive Causal Action Question Answering Over Longer Causal Chains Grounded in Dynamic Visual Scenes

Apr 01, 2024
Ting En Lam, Yuhan Chen, Elston Tan, Eric Peh, Ruirui Chen, Paritosh Parmar, Basura Fernando

Zero Shot Open-ended Video Inference

Jan 23, 2024
Ee Yeo Keat, Zhang Hao, Alexander Matyasko, Basura Fernando

Learning to Visually Connect Actions and their Effects

Jan 19, 2024
Eric Peh, Paritosh Parmar, Basura Fernando

Motion Flow Matching for Human Motion Synthesis and Editing

Dec 14, 2023
Vincent Tao Hu, Wenzhe Yin, Pingchuan Ma, Yunlu Chen, Basura Fernando, Yuki M Asano, Efstratios Gavves, Pascal Mettes, Bjorn Ommer, Cees G. M. Snoek

Semi-supervised multimodal coreference resolution in image narrations

Oct 20, 2023
Arushi Goel, Basura Fernando, Frank Keller, Hakan Bilen

ClipSitu: Effectively Leveraging CLIP for Conditional Predictions in Situation Recognition

Jul 02, 2023
Debaditya Roy, Dhruv Verma, Basura Fernando

Revealing the Illusion of Joint Multimodal Understanding in VideoQA Models

Jun 15, 2023
Ishaan Singh Rawal, Shantanu Jaiswal, Basura Fernando, Cheston Tan

Modelling Spatio-Temporal Interactions for Compositional Action Recognition

May 04, 2023
Ramanathan Rajendiran, Debaditya Roy, Basura Fernando

Fine-Grained Regional Prompt Tuning for Visual Abductive Reasoning

Mar 18, 2023
Hao Zhang, Basura Fernando

Who are you referring to? Weakly supervised coreference resolution with multimodal grounding

Nov 26, 2022
Arushi Goel, Basura Fernando, Frank Keller, Hakan Bilen
