Dima Damen

An Outlook into the Future of Egocentric Vision

Aug 14, 2023
Chiara Plizzari, Gabriele Goletto, Antonino Furnari, Siddhant Bansal, Francesco Ragusa, Giovanni Maria Farinella, Dima Damen, Tatiana Tommasi

What will the future be? We wonder! In this survey, we explore the gap between current research in egocentric vision and the ever-anticipated future, where wearable computing, with outward-facing cameras and digital overlays, is expected to be integrated into our everyday lives. To understand this gap, the article starts by envisaging the future through character-based stories, showcasing through examples the limitations of current technology. We then provide a mapping between this future and previously defined research tasks. For each task, we survey its seminal works, current state-of-the-art methodologies and available datasets, then reflect on shortcomings that limit its applicability to future research. Note that this survey focuses on software models for egocentric vision, independent of any specific hardware. The paper concludes with recommendations for areas of immediate exploration to unlock our path to the future of always-on, personalised and life-enhancing egocentric vision.

* We invite comments, suggestions and corrections here: https://openreview.net/forum?id=V3974SUk1w 

EPIC Fields: Marrying 3D Geometry and Video Understanding

Jun 14, 2023
Vadim Tschernezki, Ahmad Darkhalil, Zhifan Zhu, David Fouhey, Iro Laina, Diane Larlus, Dima Damen, Andrea Vedaldi

Neural rendering is fuelling a unification of learning, 3D geometry and video understanding that has been waiting for more than two decades. Progress, however, is still hampered by a lack of suitable datasets and benchmarks. To address this gap, we introduce EPIC Fields, an augmentation of EPIC-KITCHENS with 3D camera information. Like other datasets for neural rendering, EPIC Fields removes the complex and expensive step of reconstructing cameras using photogrammetry, and allows researchers to focus on modelling problems. We illustrate the challenges of photogrammetry in egocentric videos of dynamic actions and propose innovations to address them. Compared to other neural rendering datasets, EPIC Fields is better tailored to video understanding because it is paired with labelled action segments and the recent VISOR segment annotations. To further motivate the community, we also evaluate two benchmark tasks, neural rendering and dynamic object segmentation, with strong baselines that showcase what is not possible today. We also highlight the advantage of geometry in semi-supervised video object segmentation on the VISOR annotations. EPIC Fields reconstructs 96% of videos in EPIC-KITCHENS, registering 19M frames in 99 hours recorded in 45 kitchens.
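
As a rough illustration of what per-frame camera information enables, the sketch below projects a 3D world point into a video frame with a pinhole model; the intrinsics, pose convention and values are illustrative assumptions, not the released file format.

```python
# Minimal pinhole projection sketch: per-frame camera pose (R, t) plus
# intrinsics K let us place 3D geometry into any video frame.
# All values here are hypothetical, not EPIC Fields data.
import numpy as np

def project(point_w, K, R, t):
    """Project a 3D world point into pixel coordinates.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    p_cam = R @ point_w + t
    if p_cam[2] <= 0:                       # point behind the camera
        return None
    uv = K @ (p_cam / p_cam[2])             # perspective division, then intrinsics
    return uv[:2]

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
print(project(np.array([0.1, -0.2, 1.0]), K, R, t))
```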

* 20 pages, 16 figures. Project Webpage: http://epic-kitchens.github.io/epic-fields 

What can a cook in Italy teach a mechanic in India? Action Recognition Generalisation Over Scenarios and Locations

Jun 14, 2023
Chiara Plizzari, Toby Perrett, Barbara Caputo, Dima Damen

We propose and address a new generalisation problem: can a model trained for action recognition successfully classify actions when they are performed within a previously unseen scenario and in a previously unseen location? To answer this question, we introduce the Action Recognition Generalisation Over scenarios and locations dataset (ARGO1M), which contains 1.1M video clips from the large-scale Ego4D dataset, across 10 scenarios and 13 locations. We demonstrate that recognition models struggle to generalise over 10 proposed test splits, each of an unseen scenario in an unseen location. We thus propose CIR, a method to represent each video as a Cross-Instance Reconstruction of videos from other domains. Reconstructions are paired with text narrations to guide the learning of a domain generalisable representation. We provide extensive analysis and ablations on ARGO1M that show CIR outperforms prior domain generalisation works on all test splits. Code and data: https://chiaraplizz.github.io/what-can-a-cook/.
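
The sketch below illustrates the core idea of a cross-instance reconstruction as described above: each video feature is rebuilt as an attention-weighted combination of same-batch videos from other domains, and the reconstruction is aligned with its narration via a contrastive loss. The shapes, temperature and masking scheme are assumptions for illustration; this is not the authors' implementation.

```python
# Cross-instance reconstruction sketch (PyTorch), hypothetical details only.
import torch
import torch.nn.functional as F

def cross_instance_reconstruction(video_feats, text_feats, domain_ids, temperature=0.07):
    """Reconstruct each video as an attention-weighted sum of videos from
    *other* domains, then align reconstructions with their narrations."""
    sim = video_feats @ video_feats.t()                        # (B, B) pairwise similarities
    same_domain = domain_ids[:, None] == domain_ids[None, :]   # mask out same-domain videos
    weights = sim.masked_fill(same_domain, float('-inf')).softmax(dim=-1)
    recon = weights @ video_feats                              # (B, D) cross-instance reconstructions

    # InfoNCE-style alignment between reconstructions and text narrations
    logits = F.normalize(recon, dim=-1) @ F.normalize(text_feats, dim=-1).t() / temperature
    targets = torch.arange(len(logits))
    return F.cross_entropy(logits, targets)

B, D = 6, 256
domain_ids = torch.tensor([0, 0, 1, 1, 2, 2])                  # scenario/location labels
loss = cross_instance_reconstruction(torch.randn(B, D), torch.randn(B, D), domain_ids)
```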

* 21 pages, 10 figures, 11 tables. Project page: https://chiaraplizz.github.io/what-can-a-cook/ 

Perception Test: A Diagnostic Benchmark for Multimodal Video Models

May 23, 2023
Viorica Pătrăucean, Lucas Smaira, Ankush Gupta, Adrià Recasens Continente, Larisa Markeeva, Dylan Banarse, Skanda Koppula, Joseph Heyward, Mateusz Malinowski, Yi Yang, Carl Doersch, Tatiana Matejovicova, Yury Sulsky, Antoine Miech, Alex Frechette, Hanna Klimczak, Raphael Koster, Junlin Zhang, Stephanie Winkler, Yusuf Aytar, Simon Osindero, Dima Damen, Andrew Zisserman, João Carreira

We propose a novel multimodal video benchmark - the Perception Test - to evaluate the perception and reasoning skills of pre-trained multimodal models (e.g. Flamingo, BEiT-3, or GPT-4). Compared to existing benchmarks that focus on computational tasks (e.g. classification, detection or tracking), the Perception Test focuses on skills (Memory, Abstraction, Physics, Semantics) and types of reasoning (descriptive, explanatory, predictive, counterfactual) across video, audio, and text modalities, to provide a comprehensive and efficient evaluation tool. The benchmark probes pre-trained models for their transfer capabilities in a zero-shot / few-shot or limited finetuning regime. For these purposes, the Perception Test introduces 11.6k real-world videos of 23s average length, designed to show perceptually interesting situations, filmed by around 100 participants worldwide. The videos are densely annotated with six types of labels (multiple-choice and grounded video question-answers, object and point tracks, temporal action and sound segments), enabling both language and non-language evaluations. The fine-tuning and validation splits of the benchmark are publicly available (CC-BY license), in addition to a challenge server with a held-out test split. Human baseline results compared to state-of-the-art video QA models show a significant gap in performance (91.4% vs 43.6%), suggesting that there is significant room for improvement in multimodal video understanding. Dataset, baseline code, and challenge server are available at https://github.com/deepmind/perception_test
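
For concreteness, a minimal sketch of how multiple-choice question-answer accuracy could be scored on such a benchmark is given below; the record fields (`video_id`, `options`, `answer_idx`) and the scoring interface are hypothetical, not the released schema.

```python
# Multiple-choice accuracy sketch: a model scores each candidate answer and
# accuracy is the fraction of questions where the top option is correct.
import random
from typing import Callable, Dict, List

def mc_accuracy(questions: List[Dict],
                score_fn: Callable[[str, str, List[str]], List[float]]) -> float:
    """Fraction of questions where the top-scoring option matches the ground truth."""
    correct = 0
    for q in questions:
        scores = score_fn(q["video_id"], q["question"], q["options"])
        predicted = max(range(len(scores)), key=scores.__getitem__)
        correct += int(predicted == q["answer_idx"])
    return correct / len(questions)

# Toy usage: a random-guess baseline over a single 3-way question.
questions = [{"video_id": "v_000", "question": "What happened to the cup?",
              "options": ["it fell", "it was filled", "nothing"], "answer_idx": 1}]
print(mc_accuracy(questions, lambda vid, q, opts: [random.random() for _ in opts]))
```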

* 25 pages, 11 figures 

Use Your Head: Improving Long-Tail Video Recognition

Apr 03, 2023
Toby Perrett, Saptarshi Sinha, Tilo Burghardt, Majid Mirmehdi, Dima Damen

This paper presents an investigation into long-tail video recognition. We demonstrate that, unlike naturally-collected video datasets and existing long-tail image benchmarks, current video benchmarks fall short on multiple long-tailed properties. Most critically, they lack few-shot classes in their tails. In response, we propose new video benchmarks that better assess long-tail recognition, by sampling subsets from two datasets: SSv2 and VideoLT. We then propose a method, Long-Tail Mixed Reconstruction (LMR), which reduces overfitting to instances from few-shot classes by reconstructing them as weighted combinations of samples from head classes. LMR then employs label mixing to learn robust decision boundaries. It achieves state-of-the-art average class accuracy on EPIC-KITCHENS and the proposed SSv2-LT and VideoLT-LT. Benchmarks and code at: tobyperrett.github.io/lmr
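
A minimal sketch of the reconstruction-plus-label-mixing idea follows: tail-class features in a batch are rebuilt from head-class features, and their labels are mixed with the same weights. The function name `lmr_mix`, the mixing coefficient and all shapes are illustrative assumptions, not the authors' code.

```python
# Sketch: reconstruct few-shot (tail) samples from head samples and mix labels.
import torch
import torch.nn.functional as F

def lmr_mix(feats, labels, is_tail, num_classes, alpha=0.5):
    """feats: (B, D), labels: (B,), is_tail: (B,) bool mask of few-shot classes."""
    one_hot = F.one_hot(labels, num_classes).float()
    head_feats, head_labels = feats[~is_tail], one_hot[~is_tail]

    # Attention of each tail sample over head samples in the batch
    w = (feats[is_tail] @ head_feats.t()).softmax(dim=-1)        # (T, H)
    recon_feats = w @ head_feats                                  # reconstructed tail features
    recon_labels = w @ head_labels                                # soft labels from the same weights

    mixed_feats, mixed_labels = feats.clone(), one_hot.clone()
    mixed_feats[is_tail] = alpha * feats[is_tail] + (1 - alpha) * recon_feats
    mixed_labels[is_tail] = alpha * one_hot[is_tail] + (1 - alpha) * recon_labels
    return mixed_feats, mixed_labels

feats = torch.randn(6, 128)
labels = torch.tensor([0, 0, 1, 1, 2, 3])
is_tail = torch.tensor([False, False, False, False, True, True])
mixed_feats, mixed_labels = lmr_mix(feats, labels, is_tail, num_classes=4)
```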

* CVPR 2023 

Epic-Sounds: A Large-scale Dataset of Actions That Sound

Feb 01, 2023
Jaesung Huh, Jacob Chalk, Evangelos Kazakos, Dima Damen, Andrew Zisserman

We introduce EPIC-SOUNDS, a large-scale dataset of audio annotations capturing temporal extents and class labels within the audio stream of the egocentric videos of EPIC-KITCHENS-100. We propose an annotation pipeline where annotators temporally label distinguishable audio segments and describe the action that could have caused the sound. We identify actions that can be discriminated purely from audio by grouping these free-form descriptions of audio into classes. For actions that involve objects colliding, we collect human annotations of the materials of these objects (e.g. a glass object being placed on a wooden surface), which we verify against visual labels, discarding ambiguities. Overall, EPIC-SOUNDS includes 78.4k categorised segments of audible events and actions, distributed across 44 classes, as well as 39.2k non-categorised segments. We train and evaluate two state-of-the-art audio recognition models on our dataset, highlighting the importance of audio-only labels and the limitations of current models in recognising actions that sound.
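
As an illustration of consuming temporally-labelled audio segments, the sketch below slices a waveform by a segment's start/stop times and computes a log-mel input for an audio classifier; the annotation field names and audio parameters are assumptions, not the dataset's released format.

```python
# Slice an annotated audio segment and build a log-mel classifier input.
# Field names and sample rate are hypothetical stand-ins.
import torch
import torchaudio

segment = {"start_sec": 12.4, "stop_sec": 13.1, "class_id": 7}    # e.g. a collision sound
sample_rate = 24000
waveform = torch.randn(1, sample_rate * 20)                        # stand-in for a video's audio track

s = int(segment["start_sec"] * sample_rate)
e = int(segment["stop_sec"] * sample_rate)
clip = waveform[:, s:e]                                            # the labelled temporal extent

mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate,
                                           n_fft=1024, hop_length=256, n_mels=64)
log_mel = torch.log(mel(clip) + 1e-6)                              # (1, 64, frames) classifier input
```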

* 6 pages, 4 figures 

Refining Action Boundaries for One-stage Detection

Oct 25, 2022
Hanyuan Wang, Majid Mirmehdi, Dima Damen, Toby Perrett

Current one-stage action detection methods, which simultaneously predict action boundaries and the corresponding class, do not estimate or use a measure of confidence in their boundary predictions, which can lead to inaccurate boundaries. We incorporate the estimation of boundary confidence into one-stage anchor-free detection, through an additional prediction head that predicts the refined boundaries with higher confidence. We obtain state-of-the-art performance on the challenging EPIC-KITCHENS-100 and the standard THUMOS14 action detection benchmarks, and achieve improved performance on the ActivityNet-1.3 benchmark.
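
A minimal sketch of a one-stage, anchor-free head with an extra refinement branch is shown below: alongside per-location class scores and coarse start/end offsets, a third branch predicts refined offsets together with a confidence score that gates how much they are trusted. The layer choices and gating rule are illustrative assumptions, not the paper's exact head.

```python
# One-stage anchor-free detection head with a boundary-refinement branch (sketch).
import torch
import torch.nn as nn

class BoundaryHead(nn.Module):
    def __init__(self, dim, num_classes):
        super().__init__()
        self.cls = nn.Conv1d(dim, num_classes, 3, padding=1)   # class scores per temporal location
        self.reg = nn.Conv1d(dim, 2, 3, padding=1)             # coarse start/end offsets
        self.refine = nn.Conv1d(dim, 3, 3, padding=1)          # refined offsets + confidence

    def forward(self, feats):                                   # feats: (B, D, T)
        cls_logits = self.cls(feats)
        coarse = self.reg(feats).relu()
        refine = self.refine(feats)
        delta, conf = refine[:, :2], refine[:, 2:].sigmoid()
        boundaries = coarse + conf * delta                      # trust the refinement where confident
        return cls_logits, boundaries, conf

head = BoundaryHead(dim=256, num_classes=97)
cls_logits, boundaries, conf = head(torch.randn(2, 256, 100))
```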

* Accepted to AVSS 2022. Our code is available at https://github.com/hanielwang/Refining_Boundary_Head.git 

Play It Back: Iterative Attention for Audio Recognition

Oct 20, 2022
Alexandros Stergiou, Dima Damen

A key function of auditory cognition is the association of characteristic sounds with their corresponding semantics over time. Humans attempting to discriminate between fine-grained audio categories often replay the same discriminative sounds to increase their prediction confidence. We propose an end-to-end attention-based architecture that, through selective repetition, attends to the most discriminative sounds across the audio sequence. Our model initially uses the full audio sequence and iteratively refines the temporal segments to replay based on slot attention. At each playback, the selected segments are replayed using a smaller hop length, which yields higher-resolution features within these segments. We show that our method can consistently achieve state-of-the-art performance across three audio-classification benchmarks: AudioSet, VGG-Sound, and EPIC-KITCHENS-100.
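
The sketch below captures the replay intuition only: temporal segments of the input are scored (here crudely by energy, standing in for learned slot attention) and the top-scoring segment is re-processed with a smaller hop length, i.e. at higher temporal resolution. It is not the paper's architecture.

```python
# Replay sketch: pick the most "interesting" segment and recompute its
# spectrogram with a smaller hop length (finer temporal resolution).
import torch

def spectrogram(wave, n_fft=1024, hop=320):
    return torch.stft(wave, n_fft, hop_length=hop,
                      window=torch.hann_window(n_fft), return_complex=True).abs()

def playback(wave, num_segments=8, hops=(320, 160)):
    spec = spectrogram(wave, hop=hops[0])                       # coarse first pass
    frames_per_seg = spec.shape[-1] // num_segments
    # Energy per segment stands in for learned slot attention scores.
    seg_scores = spec[..., :frames_per_seg * num_segments] \
        .reshape(spec.shape[0], num_segments, frames_per_seg).mean(dim=(0, 2))
    best = seg_scores.argmax().item()

    samples_per_seg = wave.shape[-1] // num_segments
    segment = wave[best * samples_per_seg:(best + 1) * samples_per_seg]
    return spectrogram(segment, hop=hops[1])                    # replay at finer resolution

fine_spec = playback(torch.randn(16000 * 2))                    # 2 s of 16 kHz audio
```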


ConTra: (Con)text (Tra)nsformer for Cross-Modal Video Retrieval

Oct 09, 2022
Adriano Fragomeni, Michael Wray, Dima Damen

In this paper, we re-examine the task of cross-modal clip-sentence retrieval, where the clip is part of a longer untrimmed video. When the clip is short or visually ambiguous, knowledge of its local temporal context (i.e. surrounding video segments) can be used to improve the retrieval performance. We propose Context Transformer (ConTra), an encoder architecture that models the interaction between a video clip and its local temporal context in order to enhance its embedded representations. Importantly, we supervise the context transformer using contrastive losses in the cross-modal embedding space. We explore context transformers for video and text modalities. Results consistently demonstrate improved performance on three datasets: YouCook2, EPIC-KITCHENS and a clip-sentence version of ActivityNet Captions. Exhaustive ablation studies and context analysis show the efficacy of the proposed method.
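
A minimal sketch of a clip-plus-context encoder trained with a cross-modal contrastive loss is given below; the transformer depth, embedding size, symmetric InfoNCE loss and centre-token readout are assumptions for illustration, not the ConTra implementation.

```python
# Clip-plus-context encoder sketch: the centre clip attends to its temporal
# neighbours, and the enhanced embedding is trained against sentence embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextEncoder(nn.Module):
    def __init__(self, dim=512, context=2, heads=8, layers=2):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(2 * context + 1, dim))       # learned positions
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)

    def forward(self, clips):                  # clips: (B, 2*context+1, D), centre = query clip
        out = self.encoder(clips + self.pos)
        return F.normalize(out[:, clips.shape[1] // 2], dim=-1)          # enhanced centre embedding

def clip_sentence_nce(video_emb, text_emb, temperature=0.05):
    logits = video_emb @ F.normalize(text_emb, dim=-1).t() / temperature
    targets = torch.arange(len(logits))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

enc = ContextEncoder()
loss = clip_sentence_nce(enc(torch.randn(4, 5, 512)), torch.randn(4, 512))
```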

* Accepted in ACCV 2022 

EPIC-KITCHENS VISOR Benchmark: VIdeo Segmentations and Object Relations

Sep 26, 2022
Ahmad Darkhalil, Dandan Shan, Bin Zhu, Jian Ma, Amlan Kar, Richard Higgins, Sanja Fidler, David Fouhey, Dima Damen

We introduce VISOR, a new dataset of pixel annotations and a benchmark suite for segmenting hands and active objects in egocentric video. VISOR annotates videos from EPIC-KITCHENS, which comes with a new set of challenges not encountered in current video segmentation datasets. Specifically, we need to ensure both short- and long-term consistency of pixel-level annotations as objects undergo transformative interactions, e.g. an onion is peeled, diced and cooked; here we aim to obtain accurate pixel-level annotations of the peel, onion pieces, chopping board, knife and pan, as well as the acting hands. VISOR introduces an annotation pipeline, AI-powered in parts, for scalability and quality. In total, we publicly release 272K manual semantic masks of 257 object classes, 9.9M interpolated dense masks and 67K hand-object relations, covering 36 hours of 179 untrimmed videos. Along with the annotations, we introduce three challenges in video object segmentation, interaction understanding and long-term reasoning. For data, code and leaderboards: http://epic-kitchens.github.io/VISOR
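
As a rough illustration of working with pixel-level mask annotations, the sketch below rasterises polygon-style masks and checks whether a hand mask overlaps an object mask as a crude stand-in for a hand-object relation; the polygon representation and values are assumptions, not VISOR's released format.

```python
# Rasterise polygon-style masks and test hand/object overlap (illustrative only).
import numpy as np
from PIL import Image, ImageDraw

def polygon_to_mask(polygon, height, width):
    """polygon: list of (x, y) vertices in pixel coordinates -> boolean mask."""
    img = Image.new("L", (width, height), 0)
    ImageDraw.Draw(img).polygon(polygon, outline=1, fill=1)
    return np.array(img, dtype=bool)

H, W = 480, 854
hand = polygon_to_mask([(100, 200), (180, 200), (180, 300), (100, 300)], H, W)
onion = polygon_to_mask([(170, 250), (260, 250), (260, 340), (170, 340)], H, W)
in_contact = bool((hand & onion).any())          # True if the two masks overlap
```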

* 10 pages main, 38 pages appendix. Accepted at NeurIPS 2022 Track on Datasets and Benchmarks Data, code and leaderboards from: http://epic-kitchens.github.io/VISOR 