Debaditya Roy

ClipSitu: Effectively Leveraging CLIP for Conditional Predictions in Situation Recognition

Jul 02, 2023
Debaditya Roy, Dhruv Verma, Basura Fernando

Situation recognition is the task of generating a structured summary of what is happening in an image using an activity verb and the semantic roles played by actors and objects. In this task, the same activity verb can describe a diverse set of situations, and the same actor or object category can play a diverse set of semantic roles depending on the situation depicted in the image. Hence, a model needs to understand the context of the image and the visual-linguistic meaning of semantic roles. Therefore, we leverage the CLIP foundation model, which has learned the context of images via language descriptions. We show that deeper and wider multi-layer perceptron (MLP) blocks using CLIP image and text embedding features obtain noteworthy results on situation recognition, even outperforming the state-of-the-art CoFormer, a Transformer-based model, thanks to the external implicit visual-linguistic knowledge encapsulated by CLIP and the expressive power of modern MLP block designs. Motivated by this, we design a cross-attention-based Transformer over CLIP visual tokens that models the relation between textual roles and visual entities. Our cross-attention-based Transformer, ClipSitu XTF, outperforms the existing state-of-the-art by a large margin of 14.1% top-1 accuracy on semantic role labelling (value) on the imSitu dataset. We will make the code publicly available.
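A minimal sketch of the MLP-on-CLIP idea described above, assuming CLIP image and text embeddings have already been extracted; the layer sizes, fusion scheme, and class count are illustrative assumptions, not the paper's exact configuration:

```python
# Illustrative sketch (not the paper's exact architecture): an MLP block that
# predicts a noun for a given (verb, role) pair from CLIP embeddings.
import torch
import torch.nn as nn

class ClipMLPHead(nn.Module):
    def __init__(self, clip_dim=512, hidden=1024, depth=3, num_nouns=10000):
        super().__init__()
        layers, in_dim = [], 3 * clip_dim  # image + verb text + role text embeddings
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.GELU(), nn.LayerNorm(hidden)]
            in_dim = hidden
        self.mlp = nn.Sequential(*layers)
        self.noun_classifier = nn.Linear(hidden, num_nouns)

    def forward(self, img_emb, verb_emb, role_emb):
        # Concatenate CLIP image and text embeddings and classify the noun (value).
        x = torch.cat([img_emb, verb_emb, role_emb], dim=-1)
        return self.noun_classifier(self.mlp(x))

# Usage with dummy CLIP features:
logits = ClipMLPHead()(torch.randn(1, 512), torch.randn(1, 512), torch.randn(1, 512))
```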

* State-of-the-art results on Situation Recognition 

Modelling Spatio-Temporal Interactions for Compositional Action Recognition

May 04, 2023
Ramanathan Rajendiran, Debaditya Roy, Basura Fernando

Humans have the natural ability to recognize actions even if the objects involved in the action or the background are changed. Humans can abstract away the action from the appearance of the objects and their context, an ability referred to as compositionality of actions. Compositional action recognition deals with imparting human-like compositional generalization abilities to action-recognition models. In this regard, extracting the interactions between humans and objects forms the basis of compositional understanding. These interactions are not affected by the appearance biases of the objects or the context. However, the context provides additional cues about the interactions between things and stuff. Hence, we need to infuse context into the human-object interactions for compositional action recognition. To this end, we first design a spatio-temporal interaction encoder that captures the human-object (things) interactions. The encoder learns spatio-temporal interaction tokens disentangled from the background context. The interaction tokens are then infused with contextual information from the video tokens to model the interactions between things and stuff. The final context-infused spatio-temporal interaction tokens are used for compositional action recognition. We show the effectiveness of our interaction-centric approach on the compositional Something-Else dataset, where we obtain a new state-of-the-art result of 83.8% top-1 accuracy, outperforming recent object-centric methods by a significant margin. Our approach of explicit human-object-stuff interaction modeling is effective even on standard action recognition datasets such as Something-Something-V2 and Epic-Kitchens-100, where we obtain comparable or better performance than the state-of-the-art.
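The context-infusion step can be pictured as a cross-attention layer in which interaction tokens query video (context) tokens; the sketch below is illustrative, with assumed dimensions, and is not the paper's exact architecture:

```python
# Illustrative sketch: interaction (things) tokens attend to video (stuff/context)
# tokens so that contextual cues are infused into the interaction representation.
import torch
import torch.nn as nn

class ContextInfusion(nn.Module):
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, interaction_tokens, video_tokens):
        # Queries: human-object interaction tokens; keys/values: video context tokens.
        ctx, _ = self.cross_attn(interaction_tokens, video_tokens, video_tokens)
        return self.norm(interaction_tokens + ctx)  # context-infused interaction tokens

# Usage with dummy tokens: 16 interaction tokens, 196 video tokens.
tokens = ContextInfusion()(torch.randn(2, 16, 768), torch.randn(2, 196, 768))
```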

* This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible 

Interaction Visual Transformer for Egocentric Action Anticipation

Nov 25, 2022
Debaditya Roy, Ramanathan Rajendiran, Basura Fernando

Human-object interaction is one of the most important visual cues that has not yet been explored for egocentric action anticipation. We propose a novel Transformer variant that models interactions by computing the change in the appearance of objects and human hands due to the execution of actions, and uses those changes to refine the video representation. Specifically, we model interactions between hands and objects using Spatial Cross-Attention (SCA) and further infuse contextual information using Trajectory Cross-Attention to obtain environment-refined interaction tokens. Using these tokens, we construct an interaction-centric video representation for action anticipation. We term our model InAViT; it achieves state-of-the-art action anticipation performance on the large-scale egocentric datasets EPIC-KITCHENS-100 (EK100) and EGTEA Gaze+. InAViT outperforms other visual-transformer-based methods, including those using object-centric video representations. On the EK100 evaluation server, InAViT is the top-performing method on the public leaderboard (at the time of submission), where it outperforms the second-best model by 3.3% on mean top-5 recall.
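A rough sketch of the two-stage cross-attention described above, with hand tokens querying object tokens (SCA) and the result querying video (environment) tokens; module names and dimensions here are assumptions for illustration, not the released InAViT code:

```python
# Illustrative two-stage cross-attention for interaction-centric refinement.
import torch
import torch.nn as nn

class InteractionRefiner(nn.Module):
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.spatial_xattn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.traj_xattn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, hand_tokens, object_tokens, video_tokens):
        # Spatial Cross-Attention: hand tokens query object tokens to capture
        # appearance changes caused by the interaction.
        inter, _ = self.spatial_xattn(hand_tokens, object_tokens, object_tokens)
        # Trajectory Cross-Attention: interaction tokens query video tokens to
        # obtain environment-refined interaction tokens.
        refined, _ = self.traj_xattn(inter, video_tokens, video_tokens)
        return refined  # used to build the interaction-centric video representation

# Usage with dummy hand, object, and video tokens.
out = InteractionRefiner()(torch.randn(1, 2, 768), torch.randn(1, 4, 768), torch.randn(1, 196, 768))
```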

Predicting the Next Action by Modeling the Abstract Goal

Sep 12, 2022
Debaditya Roy, Basura Fernando

The problem of anticipating human actions is an inherently uncertain one. However, we can reduce this uncertainty if we have a sense of the goal that the actor is trying to achieve. Here, we present an action anticipation model that leverages goal information to reduce the uncertainty in future predictions. Since we possess neither goal information nor the observed actions during inference, we resort to visual representations to encapsulate information about both actions and goals. Through this, we derive a novel concept called the abstract goal, which is conditioned on observed sequences of visual features for action anticipation. We design the abstract goal as a distribution whose parameters are estimated using a variational recurrent network. We sample multiple candidates for the next action and introduce a goal consistency measure to determine the best candidate that follows from the abstract goal. Our method obtains impressive results on the very challenging Epic-Kitchens-55 (EK55), EK100, and EGTEA Gaze+ datasets. We obtain absolute improvements of +13.69, +11.24, and +5.19 in Top-1 verb, Top-1 noun, and Top-1 action anticipation accuracy, respectively, over prior state-of-the-art methods on the seen kitchens (S1) split of EK55. Similarly, we obtain significant improvements on the unseen kitchens (S2) split for Top-1 verb (+10.75), noun (+5.84), and action (+2.87) anticipation. A similar trend is observed on the EGTEA Gaze+ dataset, where absolute improvements of +9.9, +13.1, and +6.8 are obtained for noun, verb, and action anticipation. As of this submission, our method is the new state-of-the-art for action anticipation on EK55 and EGTEA Gaze+ (https://competitions.codalab.org/competitions/20071#results). Code is available at https://github.com/debadityaroy/Abstract_Goal
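A compact sketch of the abstract-goal idea under stated assumptions: a recurrent encoder produces the parameters of a latent goal distribution, several goals are sampled, and a simple cosine-similarity score stands in for the paper's goal consistency measure when selecting the best next-action candidate. Layer names and dimensions are hypothetical:

```python
# Illustrative sketch of sampling abstract goals and picking a goal-consistent action.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AbstractGoal(nn.Module):
    def __init__(self, feat_dim=2048, hid=512, zdim=256, num_actions=3806):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hid, batch_first=True)
        self.to_mu, self.to_logvar = nn.Linear(hid, zdim), nn.Linear(hid, zdim)
        self.action_head = nn.Linear(zdim, num_actions)
        self.action_emb = nn.Embedding(num_actions, zdim)

    def forward(self, visual_feats, num_samples=5):
        _, h = self.rnn(visual_feats)                      # observed visual features (B, T, D)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        std = torch.exp(0.5 * logvar)
        goals = mu + std * torch.randn(num_samples, *mu.shape)  # sampled abstract goals (S, B, z)
        cands = self.action_head(goals).argmax(-1)               # one next-action candidate per goal
        # Goal consistency (stand-in): keep the candidate whose embedding best agrees with the mean goal.
        consistency = F.cosine_similarity(self.action_emb(cands), mu.unsqueeze(0), dim=-1)
        best = consistency.argmax(0)
        return cands.gather(0, best.unsqueeze(0)).squeeze(0)    # predicted next action per batch item

# Usage with dummy features: 8 observed timesteps of 2048-d visual features.
pred = AbstractGoal()(torch.randn(2, 8, 2048))
```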

FlowCaps: Optical Flow Estimation with Capsule Networks For Action Recognition

Nov 08, 2020
Vinoj Jayasundara, Debaditya Roy, Basura Fernando

Capsule networks (CapsNets) have recently shown promise in many computer vision tasks, especially those pertaining to scene understanding. In this paper, we explore CapsNets' capabilities in optical flow estimation, a task at which convolutional neural networks (CNNs) have already outperformed other approaches. We propose a CapsNet-based architecture, termed FlowCaps, which attempts to a) achieve better correspondence matching via finer-grained, motion-specific, and more interpretable encoding crucial for optical flow estimation, b) perform better-generalizable optical flow estimation, c) utilize less ground-truth data, and d) significantly reduce the computational complexity needed to achieve good performance, in comparison to its CNN counterparts.
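For readers unfamiliar with capsules, the sketch below shows the standard CapsNet "squash" non-linearity that such architectures build on; it is generic background, not FlowCaps itself:

```python
# Generic capsule "squash" non-linearity (background sketch, not FlowCaps).
import torch

def squash(s, dim=-1, eps=1e-8):
    # Shrinks short capsule vectors toward zero and long ones toward unit length,
    # so a capsule's length can be read as the probability that an entity is present.
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

caps = squash(torch.randn(4, 32, 8))  # e.g. 32 capsules of dimension 8 per sample
```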

Defining Traffic States using Spatio-temporal Traffic Graphs

Jul 27, 2020
Debaditya Roy, K. Naveen Kumar, C. Krishna Mohan

Intersections are one of the main sources of congestion, and hence it is important to understand traffic behavior at intersections. Particularly in developing countries with high vehicle density, mixed traffic types, and lane-less driving behavior, it is difficult to distinguish between congested and normal traffic behavior. In this work, we propose a way to understand the traffic state of smaller spatial regions at intersections using traffic graphs. The way these traffic graphs evolve over time reveals different traffic states: a) congestion is forming (clumping), b) congestion is dispersing (unclumping), or c) traffic is flowing normally (neutral). We train a spatio-temporal deep network to identify these changes. We also introduce a large dataset called EyeonTraffic (EoT) containing 3 hours of aerial videos collected at 3 busy intersections in Ahmedabad, India. Our experiments on the EoT dataset show that the traffic graphs can help in correctly identifying congestion-prone behavior in different spatial regions of an intersection.
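A toy sketch of how such traffic graphs might be built and read, with a hypothetical proximity radius and a naive edge-count heuristic standing in for the learned spatio-temporal network:

```python
# Illustrative sketch: proximity graphs over vehicle positions and a crude
# clumping/unclumping/neutral readout from how edge density changes over time.
import numpy as np

def traffic_graph(positions, radius=15.0):
    """positions: (N, 2) array of vehicle centroids in one frame (units assumed metres)."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return (d < radius) & ~np.eye(len(positions), dtype=bool)   # adjacency matrix

def region_state(adj_t0, adj_t1):
    # More edges over time -> clumping; fewer -> unclumping; otherwise neutral.
    delta = int(adj_t1.sum()) - int(adj_t0.sum())
    return "clumping" if delta > 0 else "unclumping" if delta < 0 else "neutral"

# Usage with random positions for two frames of the same region.
state = region_state(traffic_graph(np.random.rand(20, 2) * 50),
                     traffic_graph(np.random.rand(20, 2) * 50))
```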

* Accepted at the 23rd IEEE International Conference on Intelligent Transportation Systems, September 20-23, 2020. 6 pages, 6 figures 

Detection of Collision-Prone Vehicle Behavior at Intersections using Siamese Interaction LSTM

Dec 10, 2019
Debaditya Roy, Tetsuhiro Ishizaka, Krishna Mohan C., Atsushi Fukuda

As a large proportion of road accidents occur at intersections, monitoring the traffic safety of intersections is important. Existing approaches are designed to investigate accidents in lane-based traffic. However, such approaches are not suitable in a lane-less mixed-traffic environment where vehicles often ply very close to each other. Hence, we propose an approach called the Siamese Interaction Long Short-Term Memory network (SILSTM) to detect collision-prone vehicle behavior. The SILSTM network learns the interaction trajectory of a vehicle, which describes the interactions of the vehicle with its neighbors at an intersection. Among the hundreds of interactions for every vehicle, only a few may be unsafe; hence, a temporal attention layer is used in the SILSTM network. Furthermore, comparing interaction trajectories requires labeling them as either unsafe or safe, but such a distinction is highly subjective, especially in lane-less traffic. Hence, in this work, we compute the characteristics of interaction trajectories involved in accidents using the collision energy model. Interaction trajectories that match accident characteristics are labeled as unsafe, while the rest are considered safe. Finally, no existing dataset allows us to monitor a particular intersection for a long duration. Therefore, we introduce the SkyEye dataset, which contains 1 hour of continuous aerial footage from each of 4 chosen intersections in the city of Ahmedabad, India. A detailed evaluation of SILSTM on the SkyEye dataset shows that unsafe (collision-prone) interaction trajectories can be effectively detected at different intersections.
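A minimal sketch of a Siamese LSTM with temporal attention over interaction steps; the layer sizes, feature dimension, and distance-based comparison are assumptions for illustration, not the exact SILSTM formulation:

```python
# Illustrative Siamese LSTM: encode two interaction trajectories and compare them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseInteractionLSTM(nn.Module):
    def __init__(self, feat_dim=16, hid=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hid, batch_first=True)
        self.attn = nn.Linear(hid, 1)   # temporal attention over interaction steps

    def encode(self, traj):
        h, _ = self.lstm(traj)                      # (B, T, hid)
        w = torch.softmax(self.attn(h), dim=1)      # weights the few unsafe moments
        return (w * h).sum(dim=1)                   # (B, hid) trajectory embedding

    def forward(self, traj_a, traj_b):
        # Distance between embeddings; training would push safe and unsafe apart.
        return F.pairwise_distance(self.encode(traj_a), self.encode(traj_b))

# Usage with two dummy interaction trajectories of 30 timesteps each.
dist = SiameseInteractionLSTM()(torch.randn(2, 30, 16), torch.randn(2, 30, 16))
```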

* 10 pages, 4 figures, submitted to IEEE Transactions on Intelligent Transportation Systems 