Isma Hadji

GePSAn: Generative Procedure Step Anticipation in Cooking Videos

Oct 12, 2023
Mohamed Ashraf Abdelsalam, Samrudhdhi B. Rangrej, Isma Hadji, Nikita Dvornik, Konstantinos G. Derpanis, Afsaneh Fazly

We study the problem of future step anticipation in procedural videos. Given a video of an ongoing procedural activity, we predict a plausible next procedure step described in rich natural language. While most previous work focuses on the problem of data scarcity in procedural video datasets, another core challenge of future anticipation is how to account for multiple plausible future realizations in natural settings. This problem has been largely overlooked in previous work. To address this challenge, we frame future step prediction as modelling the distribution of all possible candidates for the next step. Specifically, we design a generative model that takes a series of video clips as input and generates multiple plausible and diverse candidates (in natural language) for the next step. Following previous work, we sidestep the scarcity of video annotations by pretraining our model on a large text-based corpus of procedural activities and then transferring the model to the video domain. Our experiments, in both the textual and video domains, show that our model captures diversity in next-step prediction and generates multiple plausible future predictions. Moreover, our model establishes new state-of-the-art results on YouCookII, where it outperforms existing baselines on next-step anticipation. Finally, we also show that our model can successfully transfer from text to the video domain zero-shot, i.e., without fine-tuning or adaptation, and produces good-quality future step predictions from video.
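
To make the mechanism concrete, here is a minimal sketch, not the authors' released code, of sampling diverse next-step candidates from a conditional generative model: observed clip features are encoded with a transformer, several latent codes are drawn, and each code is decoded into a candidate token sequence. All module sizes, the toy vocabulary, the start token, and the greedy GRU decoder are illustrative assumptions.

```python
# Illustrative sketch only: sizes, vocabulary, and decoding scheme are assumptions.
import torch
import torch.nn as nn

class NextStepGenerator(nn.Module):
    def __init__(self, feat_dim=512, d_model=256, vocab_size=1000, max_len=12):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.to_mu = nn.Linear(d_model, d_model)      # latent mean
        self.to_logvar = nn.Linear(d_model, d_model)  # latent log-variance
        self.embed = nn.Embedding(vocab_size, d_model)
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)
        self.max_len = max_len

    @torch.no_grad()
    def sample_candidates(self, clip_feats, num_candidates=5):
        # clip_feats: (1, num_clips, feat_dim) features of the observed video
        ctx = self.encoder(self.proj(clip_feats)).mean(dim=1)  # (1, d_model)
        mu, logvar = self.to_mu(ctx), self.to_logvar(ctx)
        candidates = []
        for _ in range(num_candidates):
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # one latent sample
            tokens, h = [0], z.unsqueeze(0)  # token 0 plays the role of <bos> (assumption)
            for _ in range(self.max_len):
                step_in = self.embed(torch.tensor([[tokens[-1]]]))
                out, h = self.decoder(step_in, h)
                tokens.append(self.out(out[:, -1]).argmax(-1).item())
            candidates.append(tokens[1:])
        return candidates  # multiple diverse token sequences for the next step

model = NextStepGenerator()
# With untrained weights the tokens are arbitrary; the point is the sampling mechanism.
print(model.sample_candidates(torch.randn(1, 8, 512), num_candidates=3))
```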

* published at ICCV 2023 

StepFormer: Self-supervised Step Discovery and Localization in Instructional Videos

Apr 26, 2023
Nikita Dvornik, Isma Hadji, Ran Zhang, Konstantinos G. Derpanis, Animesh Garg, Richard P. Wildes, Allan D. Jepson

Instructional videos are an important resource for learning procedural tasks from human demonstrations. However, the instruction steps in such videos are typically short and sparse, with most of the video being irrelevant to the procedure. This motivates the need to temporally localize the instruction steps in such videos, i.e., the task of key-step localization. Traditional methods for key-step localization require video-level human annotations and thus do not scale to large datasets. In this work, we tackle the problem with no human supervision and introduce StepFormer, a self-supervised model that discovers and localizes instruction steps in a video. StepFormer is a transformer decoder that attends to the video with learnable queries and produces a sequence of slots capturing the key-steps in the video. We train our system on a large dataset of instructional videos, using their automatically generated subtitles as the only source of supervision. In particular, we supervise our system with a sequence of text narrations using an order-aware loss function that filters out irrelevant phrases. We show that our model outperforms all previous unsupervised and weakly-supervised approaches on step detection and localization by a large margin on three challenging benchmarks. Moreover, our model demonstrates an emergent ability to perform zero-shot multi-step localization and outperforms all relevant baselines at this task.
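
The core mechanism can be sketched as follows; the layer counts, feature sizes, and number of step queries below are assumptions, not the released StepFormer configuration. A fixed set of learnable queries cross-attends to video features through a transformer decoder and returns an ordered sequence of step slots. Training would then align these slots to narration embeddings with an order-aware sequence loss, which is omitted here.

```python
# Illustrative sketch only; shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class StepSlotDecoder(nn.Module):
    def __init__(self, feat_dim=512, d_model=256, num_slots=32):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        self.queries = nn.Parameter(torch.randn(num_slots, d_model))  # learnable step queries
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)

    def forward(self, video_feats):
        # video_feats: (batch, num_frames, feat_dim) frame features
        memory = self.proj(video_feats)
        tgt = self.queries.unsqueeze(0).expand(video_feats.size(0), -1, -1)
        return self.decoder(tgt, memory)  # (batch, num_slots, d_model) ordered step slots

slots = StepSlotDecoder()(torch.randn(2, 200, 512))
print(slots.shape)  # torch.Size([2, 32, 256])
```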

* CVPR'23 

Graph2Vid: Flow Graph to Video Grounding for Weakly-Supervised Multi-Step Localization

Oct 10, 2022
Nikita Dvornik, Isma Hadji, Hai Pham, Dhaivat Bhatt, Brais Martinez, Afsaneh Fazly, Allan D. Jepson

In this work, we consider the problem of weakly-supervised multi-step localization in instructional videos. An established approach to this problem is to rely on a given list of steps. However, in reality, there is often more than one way to execute a procedure successfully, by following the steps in slightly varying orders. Thus, for successful localization in a given video, recent works require the actual order of the procedure steps in that video to be provided by human annotators at both training and test times. Instead, we rely only on generic procedural text that is not tied to a specific video. We represent the various ways to complete the procedure by transforming the list of instructions into a procedure flow graph, which captures the partial order of steps. Using flow graphs reduces annotation requirements at both training and test time. To this end, we introduce the new problem of flow graph to video grounding. In this setup, we seek the optimal step ordering consistent with the procedure flow graph and a given video. To solve this problem, we propose a new algorithm, Graph2Vid, that infers the actual ordering of steps in the video and simultaneously localizes them. To show the advantage of our proposed formulation, we extend the CrossTask dataset with procedure flow graph information. Our experiments show that Graph2Vid is both more efficient than the baselines and yields strong step localization results, without the need for step order annotation.
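
For intuition, the sketch below grounds a flow graph in a video by brute force: it enumerates step orderings consistent with the graph's partial order, scores each ordering against a frame-vs-step similarity matrix with plain DTW, and keeps the best one. This only illustrates the problem setup; the paper's Graph2Vid algorithm solves it with a single efficient dynamic program rather than enumeration. The toy graph and random similarities are assumptions.

```python
# Brute-force illustration of flow-graph-to-video grounding (not the efficient Graph2Vid DP).
import itertools
import numpy as np

def dtw_score(sim):  # sim[t, k]: similarity of frame t to the k-th step of the ordering
    T, K = sim.shape
    D = np.full((T + 1, K + 1), -np.inf)
    D[0, 0] = 0.0
    for t in range(1, T + 1):
        for k in range(1, K + 1):
            D[t, k] = sim[t - 1, k - 1] + max(D[t - 1, k], D[t - 1, k - 1], D[t, k - 1])
    return D[T, K]

def valid_orderings(steps, edges):
    # keep only permutations consistent with the partial order (u must precede v)
    for perm in itertools.permutations(steps):
        pos = {s: i for i, s in enumerate(perm)}
        if all(pos[u] < pos[v] for u, v in edges):
            yield perm

def ground(flow_edges, steps, frame_step_sim):
    return max(valid_orderings(steps, flow_edges),
               key=lambda p: dtw_score(frame_step_sim[:, list(p)]))

steps = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]  # steps 1 and 2 may come in either order
sim = np.random.rand(50, 4)               # stand-in for video/step similarities
print(ground(edges, steps, sim))          # best step ordering for this video
```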

* ECCV 2022, oral 

P3IV: Probabilistic Procedure Planning from Instructional Videos with Weak Supervision

May 04, 2022
He Zhao, Isma Hadji, Nikita Dvornik, Konstantinos G. Derpanis, Richard P. Wildes, Allan D. Jepson

In this paper, we study the problem of procedure planning in instructional videos. Here, an agent must produce a plausible sequence of actions that can transform the environment from a given start state to a desired goal state. When learning procedure planning from instructional videos, most recent work leverages intermediate visual observations as supervision, which requires expensive annotation efforts to precisely localize all the instructional steps in training videos. In contrast, we remove the need for expensive temporal video annotations and propose a weakly supervised approach by learning from natural language instructions. Our model is based on a transformer equipped with a memory module, which maps the start and goal observations to a sequence of plausible actions. Furthermore, we augment our model with a probabilistic generative module to capture the uncertainty inherent to procedure planning, an aspect largely overlooked by previous work. We evaluate our model on three datasets and show that our weakly supervised approach outperforms previous fully supervised state-of-the-art models on multiple metrics.
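
A minimal sketch of the ingredients named above is given below; the sizes, the memory/query layout, and the noise-injection scheme are assumptions, not the authors' exact design. A transformer with learnable memory tokens maps start and goal features, plus a sampled Gaussian latent, to a sequence of action logits, so that repeated sampling yields different plausible plans.

```python
# Illustrative sketch only; architecture details are assumptions.
import torch
import torch.nn as nn

class ProcedurePlanner(nn.Module):
    def __init__(self, feat_dim=512, d_model=256, horizon=4, num_actions=100, num_memory=16):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        self.memory = nn.Parameter(torch.randn(num_memory, d_model))   # learnable memory tokens
        self.step_queries = nn.Parameter(torch.randn(horizon, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.noise_proj = nn.Linear(d_model, d_model)
        self.head = nn.Linear(d_model, num_actions)

    def forward(self, start_obs, goal_obs, sample=True):
        # start_obs, goal_obs: (batch, feat_dim) visual features of the start / goal states
        b = start_obs.size(0)
        ctx = torch.stack([self.proj(start_obs), self.proj(goal_obs)], dim=1)  # (b, 2, d)
        mem = self.memory.unsqueeze(0).expand(b, -1, -1)
        queries = self.step_queries.unsqueeze(0).expand(b, -1, -1)
        if sample:  # probabilistic module: perturb the step queries with projected Gaussian noise
            queries = queries + self.noise_proj(torch.randn_like(queries))
        plan = self.decoder(queries, torch.cat([ctx, mem], dim=1))
        return self.head(plan)  # (b, horizon, num_actions) logits for each planned step

planner = ProcedurePlanner()
logits = planner(torch.randn(2, 512), torch.randn(2, 512))
print(logits.shape)  # torch.Size([2, 4, 100])
```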

* Accepted as an oral paper at CVPR 2022 

Drop-DTW: Aligning Common Signal Between Sequences While Dropping Outliers

Aug 26, 2021
Nikita Dvornik, Isma Hadji, Konstantinos G. Derpanis, Animesh Garg, Allan D. Jepson

In this work, we consider the problem of sequence-to-sequence alignment for signals containing outliers. Assuming the absence of outliers, the standard Dynamic Time Warping (DTW) algorithm efficiently computes the optimal alignment between two (generally) variable-length sequences. While DTW is robust to temporal shifts and dilations of the signal, it fails to align sequences in a meaningful way in the presence of outliers that can be arbitrarily interspersed in the sequences. To address this problem, we introduce Drop-DTW, a novel algorithm that aligns the common signal between the sequences while automatically dropping the outlier elements from the matching. The entire procedure is implemented as a single dynamic program that is efficient and fully differentiable. In our experiments, we show that Drop-DTW is a robust similarity measure for sequence retrieval and demonstrate its effectiveness as a training loss on diverse applications. With Drop-DTW, we address temporal step localization on instructional videos, representation learning from noisy videos, and cross-modal representation learning for audio-visual retrieval and localization. In all applications, we take a weakly- or unsupervised approach and demonstrate state-of-the-art results under these settings.
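
A simplified, single-table version of the idea is sketched below: in addition to the usual DTW moves, each element of either sequence may be dropped at a fixed cost. The paper's actual recursion tracks match and drop states in separate tables and is fully differentiable; the scalar sequences and drop costs here are arbitrary assumptions for illustration.

```python
# Simplified illustration of DTW with element dropping (not the paper's exact recursion).
import numpy as np

def drop_dtw(cost, drop_x, drop_z):
    # cost[i, j]: cost of matching x_i to z_j; drop_x / drop_z: per-element drop costs
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            options = []
            if i > 0 and j > 0:  # match x_i with z_j, standard DTW moves
                options.append(cost[i - 1, j - 1] + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]))
            if i > 0:            # drop x_i from the alignment
                options.append(drop_x + D[i - 1, j])
            if j > 0:            # drop z_j from the alignment
                options.append(drop_z + D[i, j - 1])
            D[i, j] = min(options)
    return D[n, m]

x = np.array([0.0, 1.0, 9.0, 2.0, 3.0])  # 9.0 is an outlier
z = np.array([0.1, 1.1, 2.1, 3.1])
pairwise = np.abs(x[:, None] - z[None, :])
print(drop_dtw(pairwise, drop_x=1.0, drop_z=1.0))  # the outlier is dropped rather than matched
```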

Representation Learning via Global Temporal Alignment and Cycle-Consistency

May 11, 2021
Isma Hadji, Konstantinos G. Derpanis, Allan D. Jepson

We introduce a weakly supervised method for representation learning based on aligning temporal sequences (e.g., videos) of the same process (e.g., a human action). The main idea is to use the global temporal ordering of latent correspondences across sequence pairs as a supervisory signal. In particular, we propose a loss based on scoring the optimal sequence alignment to train an embedding network. Our loss is based on a novel probabilistic path-finding view of dynamic time warping (DTW) that contains the following three key features: (i) the local path routing decisions are contrastive and differentiable, (ii) pairwise distances are cast as probabilities that are contrastive as well, and (iii) our formulation naturally admits a global cycle-consistency loss that verifies correspondences. For evaluation, we consider the tasks of fine-grained action classification, few-shot learning, and video synchronization. We report significant performance increases over previous methods. In addition, we report two applications of our temporal alignment framework, namely 3D pose reconstruction and fine-grained audio/visual retrieval.
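
For intuition, here is a generic differentiable soft-DTW alignment loss (smooth minimum via logsumexp) that scores the optimal alignment between two embedded sequences and can train an upstream encoder. The paper's actual loss additionally makes the path decisions and pairwise distances contrastive and adds the cycle-consistency term; those parts, and all sizes below, are omitted or assumed in this sketch.

```python
# Generic soft-DTW alignment loss sketch (not the paper's full contrastive formulation).
import torch

def soft_dtw_loss(emb_a, emb_b, gamma=0.1):
    # emb_a: (Ta, d), emb_b: (Tb, d) frame embeddings of two videos of the same process
    cost = torch.cdist(emb_a.unsqueeze(0), emb_b.unsqueeze(0)).squeeze(0)  # (Ta, Tb)
    Ta, Tb = cost.shape
    big = torch.tensor(1e6, device=cost.device)  # large finite value instead of inf
    D = [[big] * (Tb + 1) for _ in range(Ta + 1)]
    D[0][0] = torch.tensor(0.0, device=cost.device)
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            prev = torch.stack([D[i - 1][j - 1], D[i - 1][j], D[i][j - 1]])
            # smooth minimum keeps the recursion differentiable
            D[i][j] = cost[i - 1, j - 1] - gamma * torch.logsumexp(-prev / gamma, dim=0)
    return D[Ta][Tb]

a = torch.randn(20, 128, requires_grad=True)
b = torch.randn(25, 128)
loss = soft_dtw_loss(a, b)
loss.backward()  # gradients flow to the embeddings (and to an encoder upstream)
print(loss.item())
```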

* accepted to CVPR 2021 

Why Convolutional Networks Learn Oriented Bandpass Filters: Theory and Empirical Support

Nov 30, 2020
Isma Hadji, Richard P. Wildes

It has been repeatedly observed that convolutional architectures, when applied to image understanding tasks, learn oriented bandpass filters. A standard explanation of this result is that these filters reflect the structure of the images that they have been exposed to during training: natural images typically are locally composed of oriented contours at various scales, and oriented bandpass filters are matched to such structure. We offer an alternative explanation based not on the structure of images, but rather on the structure of convolutional architectures. In particular, complex exponentials are the eigenfunctions of convolution. These eigenfunctions are defined globally; however, convolutional architectures operate locally. To enforce locality, one can apply a windowing function to the eigenfunctions, which leads to oriented bandpass filters as the natural operators to be learned with convolutional architectures. From a representational point of view, these filters allow for a local systematic way to characterize and operate on an image or other signal. We offer empirical support for the hypothesis that convolutional networks learn such filters at all of their convolutional layers. While previous research has shown evidence of filters having oriented bandpass characteristics at early layers, ours appears to be the first study to document the predominance of such filter characteristics at all layers. Previous studies have missed this observation because they have concentrated on the cumulative compositional effects of filtering across layers, while we examine the filter characteristics that are present at each layer.
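
The argument can be illustrated numerically: windowing a complex exponential (an eigenfunction of convolution) with a Gaussian produces an oriented bandpass, Gabor-like filter whose spectrum is concentrated around the carrier frequency along the chosen orientation. The window size, frequency, and orientation below are arbitrary choices for the demo.

```python
# Numerical illustration: a Gaussian-windowed complex exponential is an oriented bandpass filter.
import numpy as np

size, sigma = 31, 5.0
freq, theta = 0.2, np.pi / 4  # radial frequency (cycles/pixel) and orientation
y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]

window = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # local Gaussian window
carrier = np.exp(2j * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))  # eigenfunction
gabor = window * carrier                          # windowed eigenfunction

# Its spectrum peaks near the carrier frequency along theta, i.e. the filter is
# oriented and bandpass rather than lowpass or highpass.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gabor)))
peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
freqs = np.fft.fftshift(np.fft.fftfreq(size))
print("spectral peak at (fy, fx) =", (freqs[peak[0]], freqs[peak[1]]))
# expected near (freq*sin(theta), freq*cos(theta)) ~ (0.141, 0.141)
```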

What Do We Understand About Convolutional Networks?

Mar 23, 2018
Isma Hadji, Richard P. Wildes

This document will review the most prominent proposals using multilayer convolutional architectures. Importantly, the various components of a typical convolutional network will be discussed through a review of different approaches that base their design decisions on biological findings and/or sound theoretical bases. In addition, the different attempts at understanding ConvNets via visualizations and empirical studies will be reviewed. The ultimate goal is to shed light on the role of each layer of processing involved in a ConvNet architecture, distill what we currently understand about ConvNets and highlight critical open problems.

A Spatiotemporal Oriented Energy Network for Dynamic Texture Recognition

Aug 22, 2017
Isma Hadji, Richard P. Wildes

This paper presents a novel hierarchical spatiotemporal orientation representation for spacetime image analysis. It is designed to combine the benefits of the multilayer architecture of ConvNets and a more controlled approach to spacetime analysis. A distinguishing aspect of the approach is that, unlike most contemporary convolutional networks, no learning is involved; rather, all design decisions are specified analytically with theoretical motivations. This approach makes it possible to understand what information is being extracted at each stage and layer of processing as well as to minimize heuristic choices in design. Another key aspect of the network is its recurrent nature, whereby the output of each layer of processing feeds back to the input. To keep the network size manageable across layers, a novel cross-channel feature pooling is proposed. The multilayer architecture that results systematically reveals hierarchical image structure in terms of multiscale, multiorientation properties of visual spacetime. To illustrate its utility, the network has been applied to the task of dynamic texture recognition. Empirical evaluation on multiple standard datasets shows that it sets a new state-of-the-art.
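
A 2D toy version of the analytic building blocks is sketched below (the network itself operates on 3D spatiotemporal volumes): oriented bandpass filtering, rectification into oriented energies, and divisive normalization across orientation channels standing in for cross-channel pooling. The filter parameters, orientations, and test image are arbitrary choices for the demo.

```python
# 2D toy illustration of oriented-energy filtering with cross-channel normalization.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, size=15, sigma=3.0, freq=0.25):
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    window = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return window * np.cos(2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))

def oriented_energies(image, num_orientations=4, eps=1e-6):
    responses = []
    for k in range(num_orientations):
        kernel = gabor_kernel(theta=k * np.pi / num_orientations)
        responses.append(convolve2d(image, kernel, mode='same') ** 2)  # rectified energy
    energies = np.stack(responses, axis=0)
    return energies / (energies.sum(axis=0, keepdims=True) + eps)      # normalize across channels

image = np.zeros((64, 64))
image[:, 32] = 1.0            # a vertical line
e = oriented_energies(image)
print(e[:, 32, 32])           # the theta=0 channel, tuned to vertical structure, dominates
```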

* accepted at ICCV 2017 