
Vicky Kalogeiton


Multiple Style Transfer via Variational AutoEncoder

Oct 13, 2021
Zhi-Song Liu, Vicky Kalogeiton, Marie-Paule Cani


Modern works on style transfer focus on transferring style from a single image. Recently, some approaches have studied multiple style transfer; these, however, are either too slow or fail to mix multiple styles. We propose ST-VAE, a Variational AutoEncoder for latent-space-based style transfer. It performs multiple style transfer by projecting nonlinear styles onto a linear latent space, enabling styles to be merged via linear interpolation before the new style is transferred to the content image. To evaluate ST-VAE, we experiment on COCO for single and multiple style transfer. We also present a case study revealing that ST-VAE outperforms other methods while being faster and more flexible, setting a new path for multiple style transfer.

* 5 pages, 4 figures 
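
A minimal PyTorch sketch of the idea above: each style image is encoded to a latent code, the codes are blended by linear interpolation, and the blended code is then used to stylise the content image. The encoder and decoder interfaces, names, and shapes are assumptions for illustration, not the paper's implementation.

```python
import torch

def mix_styles(style_encoder, style_images, weights):
    """Blend several styles by linear interpolation in a VAE latent space.

    style_encoder is a placeholder for a VAE encoder that maps a style image
    to its latent code (e.g. the mean of the latent distribution).
    """
    # Encode every style image to a latent code of shape (1, d).
    latents = [style_encoder(img.unsqueeze(0)) for img in style_images]
    weights = torch.tensor(weights, dtype=torch.float32)
    weights = weights / weights.sum()               # normalise mixing weights
    # Linear interpolation in latent space: z = sum_i w_i * z_i
    return sum(w * z for w, z in zip(weights, latents))

# Hypothetical usage:
# z_mixed = mix_styles(vae_encoder, [style_a, style_b], weights=[0.7, 0.3])
# stylised = decoder(content_features, z_mixed)   # transfer the blended style
```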

Face, Body, Voice: Video Person-Clustering with Multiple Modalities

May 20, 2021
Andrew Brown, Vicky Kalogeiton, Andrew Zisserman


The objective of this work is person-clustering in videos -- grouping characters according to their identity. Previous methods focus on the narrower task of face-clustering, and for the most part ignore other cues such as the person's voice, their overall appearance (hair, clothes, posture), and the editing structure of the videos. Similarly, most current datasets evaluate only the task of face-clustering, rather than person-clustering. This limits their applicability to downstream applications such as story understanding which require person-level, rather than only face-level, reasoning. In this paper we make contributions to address both these deficiencies: first, we introduce a Multi-Modal High-Precision Clustering algorithm for person-clustering in videos using cues from several modalities (face, body, and voice). Second, we introduce a Video Person-Clustering dataset, for evaluating multi-modal person-clustering. It contains body-tracks for each annotated character, face-tracks when visible, and voice-tracks when speaking, with their associated features. The dataset is by far the largest of its kind, and covers films and TV-shows representing a wide range of demographics. Finally, we show the effectiveness of using multiple modalities for person-clustering, explore the use of this new broad task for story understanding through character co-occurrences, and achieve a new state of the art on all available datasets for face and person-clustering.
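
As a rough illustration of multi-modal person-clustering, the sketch below averages cosine similarities over whichever modalities (face, body, voice) two tracks share and feeds the resulting distance matrix to agglomerative clustering. This is a simplified stand-in for the paper's Multi-Modal High-Precision Clustering algorithm; the feature format and the distance threshold are assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def multimodal_distance(tracks):
    """Combine per-modality similarities into one pairwise distance matrix.

    Each track is a dict that may hold 'face', 'body' and 'voice' embeddings;
    missing modalities (no visible face, no speech) are simply skipped.
    """
    n = len(tracks)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            sims = []
            for m in ("face", "body", "voice"):
                if tracks[i].get(m) is not None and tracks[j].get(m) is not None:
                    a, b = tracks[i][m], tracks[j][m]
                    sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
            # Maximum distance when two tracks share no modality at all.
            dist[i, j] = 1.0 - np.mean(sims) if sims else 1.0
    return dist

def cluster_people(tracks, threshold=0.4):
    """Group tracks into identities with average-linkage agglomerative clustering."""
    return AgglomerativeClustering(
        n_clusters=None, metric="precomputed",
        linkage="average", distance_threshold=threshold,
    ).fit_predict(multimodal_distance(tracks))
```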


LAEO-Net++: revisiting people Looking At Each Other in videos

Jan 06, 2021
Manuel J. Marin-Jimenez, Vicky Kalogeiton, Pablo Medina-Suarez, Andrew Zisserman


Capturing the 'mutual gaze' of people is essential for understanding and interpreting the social interactions between them. To this end, this paper addresses the problem of detecting people Looking At Each Other (LAEO) in video sequences. For this purpose, we propose LAEO-Net++, a new deep CNN for determining LAEO in videos. In contrast to previous works, LAEO-Net++ takes spatio-temporal tracks as input and reasons about the whole track. It consists of three branches, one for each character's tracked head and one for their relative position. Moreover, we introduce two new LAEO datasets: UCO-LAEO and AVA-LAEO. A thorough experimental evaluation demonstrates the ability of LAEO-Net++ to successfully determine if two people are LAEO and the temporal window where it happens. Our model achieves state-of-the-art results on the existing TVHID-LAEO video dataset, significantly outperforming previous approaches. Finally, we apply LAEO-Net++ to a social network, where we automatically infer the social relationship between pairs of people based on the frequency and duration that they LAEO, and show that LAEO can be a useful tool for guided search of human interactions in videos. The code is available at https://github.com/AVAuco/laeonetplus.

* IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020  
* 16 pages, 16 Figures. arXiv admin note: substantial text overlap with arXiv:1906.05261 
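
A schematic PyTorch sketch of the three-branch design described above: two shared-weight branches embed the head tracks, a third embeds the relative-position map, and a small classifier predicts LAEO versus not-LAEO. Layer sizes, input shapes, and pooling choices are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ThreeBranchLAEO(nn.Module):
    """Illustrative three-branch network in the spirit of LAEO-Net++."""

    def __init__(self, embed_dim=128):
        super().__init__()
        # Shared head-track branch: 3D convolutions over (channels, time, H, W).
        self.head_branch = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, embed_dim),
        )
        # Relative-position branch: a 2D map encoding the geometry of the two heads.
        self.pos_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, embed_dim),
        )
        self.classifier = nn.Linear(3 * embed_dim, 2)  # LAEO / not LAEO

    def forward(self, head_a, head_b, rel_pos_map):
        # head_a, head_b: (B, 3, T, H, W) head tracks; rel_pos_map: (B, 1, H, W)
        z = torch.cat([self.head_branch(head_a),
                       self.head_branch(head_b),
                       self.pos_branch(rel_pos_map)], dim=1)
        return self.classifier(z)
```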

Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval

Jul 23, 2020
Andrew Brown, Weidi Xie, Vicky Kalogeiton, Andrew Zisserman


Optimising a ranking-based metric, such as Average Precision (AP), is notoriously challenging due to the fact that it is non-differentiable, and hence cannot be optimised directly using gradient-descent methods. To this end, we introduce an objective that optimises instead a smoothed approximation of AP, coined Smooth-AP. Smooth-AP is a plug-and-play objective function that allows for end-to-end training of deep networks with a simple and elegant implementation. We also present an analysis of why directly optimising the ranking-based metric of AP offers benefits over other deep metric learning losses. We apply Smooth-AP to standard retrieval benchmarks: Stanford Online Products and VehicleID, and also evaluate on larger-scale datasets: INaturalist for fine-grained category retrieval, and VGGFace2 and IJB-C for face retrieval. In all cases, we improve the performance over the state-of-the-art, especially for larger-scale datasets, thus demonstrating the effectiveness and scalability of Smooth-AP to real-world scenarios.

* Accepted at ECCV 2020 
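
The sketch below illustrates the central idea for a single query: the indicator (Heaviside step) inside the AP ranking function is replaced by a temperature-scaled sigmoid, making the objective differentiable. The exact formulation and the temperature value are a simplification of the paper's loss, not the official implementation.

```python
import torch

def smooth_ap_loss(scores, labels, tau=0.01):
    """Single-query Smooth-AP sketch.

    scores: (N,) similarities of every gallery item to the query.
    labels: (N,) binary tensor marking the query's positives.
    """
    pos = scores[labels.bool()]                                    # (P,) positive scores
    # Sigmoid-relaxed pairwise comparisons: d[i, j] ~ "item j is ranked above positive i".
    d_all = torch.sigmoid((scores[None, :] - pos[:, None]) / tau)  # (P, N)
    d_pos = torch.sigmoid((pos[None, :] - pos[:, None]) / tau)     # (P, P)
    # Smoothed ranks; subtract the self-comparison, which is sigmoid(0) = 0.5.
    rank_all = 1 + d_all.sum(dim=1) - 0.5
    rank_pos = 1 + d_pos.sum(dim=1) - 0.5
    ap = (rank_pos / rank_all).mean()
    return 1 - ap                                                  # minimise 1 - Smooth-AP
```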

LAEO-Net: revisiting people Looking At Each Other in videos

Jun 12, 2019
Manuel J. Marin-Jimenez, Vicky Kalogeiton, Pablo Medina-Suarez, Andrew Zisserman


Capturing the 'mutual gaze' of people is essential for understanding and interpreting the social interactions between them. To this end, this paper addresses the problem of detecting people Looking At Each Other (LAEO) in video sequences. For this purpose, we propose LAEO-Net, a new deep CNN for determining LAEO in videos. In contrast to previous works, LAEO-Net takes spatio-temporal tracks as input and reasons about the whole track. It consists of three branches, one for each character's tracked head and one for their relative position. Moreover, we introduce two new LAEO datasets: UCO-LAEO and AVA-LAEO. A thorough experimental evaluation demonstrates the ability of LAEO-Net to successfully determine if two people are LAEO and the temporal window where it happens. Our model achieves state-of-the-art results on the existing TVHID-LAEO video dataset, significantly outperforming previous approaches. Finally, we apply LAEO-Net to social network analysis, where we automatically infer the social relationship between pairs of people based on the frequency and duration that they LAEO.

* CVPR 2019 

Action Tubelet Detector for Spatio-Temporal Action Localization

Aug 21, 2017
Vicky Kalogeiton, Philippe Weinzaepfel, Vittorio Ferrari, Cordelia Schmid


Current state-of-the-art approaches for spatio-temporal action localization rely on detections at the frame level that are then linked or tracked across time. In this paper, we leverage the temporal continuity of videos instead of operating at the frame level. We propose the ACtion Tubelet detector (ACT-detector) that takes as input a sequence of frames and outputs tubelets, i.e., sequences of bounding boxes with associated scores. In the same way that state-of-the-art object detectors rely on anchor boxes, our ACT-detector is based on anchor cuboids. We build upon the SSD framework. Convolutional features are extracted for each frame, while scores and regressions are based on the temporal stacking of these features, thus exploiting information from a sequence. Our experimental results show that leveraging sequences of frames significantly improves detection performance over using individual frames. The gain of our tubelet detector can be explained by both more accurate scores and more precise localization. Our ACT-detector outperforms the state-of-the-art methods for frame-mAP and video-mAP on the J-HMDB and UCF-101 datasets, in particular at high overlap thresholds.

* 9 pages 
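
A minimal sketch of the tubelet head described above: per-frame features from the same SSD layer are stacked along the channel dimension, and convolutional heads produce one set of class scores per anchor cuboid and one regressed box per frame. Channel counts, the sequence length, and the number of anchors are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TubeletHead(nn.Module):
    """Illustrative anchor-cuboid head in the spirit of the ACT-detector."""

    def __init__(self, feat_channels=256, seq_len=6, num_anchors=4, num_classes=25):
        super().__init__()
        self.seq_len = seq_len
        stacked = feat_channels * seq_len   # temporal stacking of per-frame features
        # One classification score set per anchor cuboid over the whole sequence.
        self.cls = nn.Conv2d(stacked, num_anchors * (num_classes + 1), kernel_size=3, padding=1)
        # One regressed box per frame for each anchor cuboid (4 coords x seq_len).
        self.reg = nn.Conv2d(stacked, num_anchors * 4 * seq_len, kernel_size=3, padding=1)

    def forward(self, per_frame_feats):
        # per_frame_feats: list of seq_len tensors, each (B, C, H, W), from the same SSD layer.
        x = torch.cat(per_frame_feats, dim=1)   # (B, seq_len * C, H, W)
        scores = self.cls(x)                    # cuboid-level class scores
        boxes = self.reg(x)                     # per-frame box regressions forming the tubelet
        return scores, boxes
```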

Analysing domain shift factors between videos and images for object detection

Jan 27, 2016
Vicky Kalogeiton, Vittorio Ferrari, Cordelia Schmid


Object detection is one of the most important challenges in computer vision. Object detectors are usually trained on bounding boxes from still images. Recently, video has been used as an alternative source of data. Yet, for a given test domain (image or video), the performance of the detector depends on the domain it was trained on. In this paper, we examine the reasons behind this performance gap. We define and evaluate different domain shift factors: spatial location accuracy, appearance diversity, image quality and aspect distribution. We examine the impact of these factors by comparing performance before and after factoring them out. The results show that all four factors affect the performance of the detectors and their combined effect explains nearly the whole performance gap.

* 8 pages 
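
As a sketch of the "factor out and re-measure" protocol, the helper below compares a detector trained on the raw source domain, on the source domain with one shift factor removed, and on the target domain itself, and reports the fraction of the performance gap that the factor explains. The evaluation function and its arguments are placeholders for illustration, not the paper's experimental code.

```python
def gap_closed_by_factor(eval_fn, baseline_train, corrected_train, reference_train, test_set):
    """Fraction of the domain gap explained by one factor.

    eval_fn(train_set, test_set) is assumed to return a scalar metric (e.g. mAP).
    """
    base = eval_fn(baseline_train, test_set)        # e.g. video-trained detector tested on images
    corrected = eval_fn(corrected_train, test_set)  # same source data with one factor cancelled out
    reference = eval_fn(reference_train, test_set)  # detector trained on the test domain itself
    gap = reference - base                          # total performance gap between the domains
    return (corrected - base) / gap if gap else 0.0
```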