Gloria Haro

Speech inpainting: Context-based speech synthesis guided by video

Jun 01, 2023
Juan F. Montesinos, Daniel Michelsanti, Gloria Haro, Zheng-Hua Tan, Jesper Jensen

Audio and visual modalities are inherently connected in speech signals: lip movements and facial expressions are correlated with speech sounds. This motivates studies that incorporate the visual modality to enhance an acoustic speech signal or even restore missing audio information. Specifically, this paper focuses on the problem of audio-visual speech inpainting, which is the task of synthesizing the speech in a corrupted audio segment so that it is consistent with the corresponding visual content and the uncorrupted audio context. We present an audio-visual transformer-based deep learning model that leverages visual cues providing information about the content of the corrupted audio. It outperforms the previous state-of-the-art audio-visual model and audio-only baselines. We also show that visual features extracted with AV-HuBERT, a large audio-visual transformer for speech recognition, are suitable for synthesizing speech.
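
To make the set-up concrete, here is a minimal, hypothetical sketch of the masking-and-fill idea: a transformer sees the uncorrupted audio context plus frame-aligned visual features and regresses the spectrogram frames inside the corrupted gap. Module names, dimensions and the L1 training target are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of audio-visual speech inpainting: the corrupted span of
# the spectrogram is zeroed out and the model fills it in from the remaining
# audio context plus per-frame visual features (e.g. AV-HuBERT embeddings).
import torch
import torch.nn as nn


class AVInpainter(nn.Module):
    def __init__(self, n_mels=80, d_visual=768, d_model=256, n_layers=4):
        super().__init__()
        self.audio_proj = nn.Linear(n_mels, d_model)
        self.visual_proj = nn.Linear(d_visual, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_mels)              # reconstructed mel frames

    def forward(self, mel, visual, gap_mask):
        # mel:      (B, T, n_mels)  spectrogram with the corrupted span zeroed out
        # visual:   (B, T, d_visual) frame-aligned visual features
        # gap_mask: (B, T) bool, True inside the corrupted span
        x = self.audio_proj(mel * ~gap_mask.unsqueeze(-1)) + self.visual_proj(visual)
        return self.head(self.encoder(x))                   # (B, T, n_mels)


if __name__ == "__main__":
    model = AVInpainter()
    mel = torch.randn(2, 100, 80)
    visual = torch.randn(2, 100, 768)
    gap = torch.zeros(2, 100, dtype=torch.bool)
    gap[:, 40:60] = True                                     # 20-frame corrupted segment
    out = model(mel, visual, gap)
    loss = nn.functional.l1_loss(out[gap], mel[gap])         # supervise only the gap
    print(out.shape, loss.item())
```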

* Accepted at Interspeech 2023 

A Graph-Based Method for Soccer Action Spotting Using Unsupervised Player Classification

Nov 22, 2022
Alejandro Cartas, Coloma Ballester, Gloria Haro

Action spotting in soccer videos is the task of identifying the specific time at which a certain key action of the game occurs. Lately, it has received a large amount of attention and powerful methods have been introduced. Action spotting involves understanding the dynamics of the game, the complexity of events, and the variation of video sequences. Most approaches have focused on the latter, given that their models exploit the global visual features of the sequences. In this work, we focus on the former by (a) identifying and representing the players, referees, and goalkeepers as nodes in a graph, and by (b) modeling their temporal interactions as sequences of graphs. For the player identification (classification) task, we obtain an accuracy of 97.72% on our annotated benchmark. For the action spotting task, our method obtains an overall performance of 57.83% average-mAP when combined with other audiovisual modalities. This performance surpasses similar graph-based methods and is competitive with computationally heavier methods. Code and data are available at https://github.com/IPCV/soccer_action_spotting.
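
As an illustration of the graph representation described above, the following sketch builds a per-frame graph in which each detected person is a node carrying its pitch position and a role class, with edges between nearby people; the feature layout and distance threshold are assumptions for illustration, not the paper's exact recipe.

```python
# Toy per-frame graph construction: nodes are people on the pitch, node features
# are (x, y) position plus a one-hot role, edges connect people within a radius.
import numpy as np

ROLES = {"team_a": 0, "team_b": 1, "goalkeeper": 2, "referee": 3}

def build_frame_graph(detections, dist_thresh=15.0):
    """detections: list of dicts {'xy': (x, y) in metres, 'role': str}."""
    xy = np.array([d["xy"] for d in detections], dtype=np.float32)      # (N, 2)
    roles = np.zeros((len(detections), len(ROLES)), dtype=np.float32)   # one-hot role
    for i, d in enumerate(detections):
        roles[i, ROLES[d["role"]]] = 1.0
    feats = np.concatenate([xy, roles], axis=1)                          # node features
    dists = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)           # pairwise distances
    adj = (dists < dist_thresh).astype(np.float32)                       # edges: nearby people
    np.fill_diagonal(adj, 0.0)
    return feats, adj

# A clip is then represented as a sequence of such graphs, one per frame.
frame = [
    {"xy": (10.0, 5.0), "role": "team_a"},
    {"xy": (12.0, 7.0), "role": "team_b"},
    {"xy": (50.0, 30.0), "role": "referee"},
]
feats, adj = build_frame_graph(frame)
print(feats.shape, adj)
```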

* Accepted at the 5th International ACM Workshop on Multimedia Content Analysis in Sports (MMSports 2022) 

VocaLiST: An Audio-Visual Synchronisation Model for Lips and Voices

Apr 05, 2022
Venkatesh S. Kadandale, Juan F. Montesinos, Gloria Haro

In this paper, we address the problem of lip-voice synchronisation in videos containing a human face and voice. Our approach is based on determining whether the lip motion and the voice in a video are synchronised, according to their audio-visual correspondence score. We propose an audio-visual cross-modal transformer-based model that outperforms several baseline models in the audio-visual synchronisation task on the standard lip-reading speech benchmark dataset LRS2. While existing methods focus mainly on lip synchronisation in speech videos, we also consider the special case of singing voice, which is more challenging for synchronisation due to sustained vowel sounds. We also investigate the relevance of lip synchronisation models trained on speech datasets in the context of singing voice. Finally, we use the frozen visual features learned by our lip synchronisation model in the singing voice separation task to outperform a baseline audio-visual model that was trained end-to-end. The demos, source code, and the pre-trained model will be made available at https://ipcv.github.io/VocaLiST/
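
The sketch below illustrates how synchronisation can be posed as binary classification over audio-visual correspondence: temporally aligned lip/audio windows form positive pairs and shifted audio windows form negatives. Window lengths, frame rates and the shift range are illustrative choices, not the exact ones used by VocaLiST.

```python
# Generate (video window, audio window, label) training triples: label 1 when
# the audio is temporally aligned with the lips, 0 when it has been shifted.
import random
import torch

def make_sync_pair(video_feats, audio_feats, win_v=5, win_a=20, max_shift_a=40):
    """video_feats: (Tv, Dv) at 25 fps; audio_feats: (Ta, Da) at 100 fps (4x video)."""
    tv = random.randint(0, video_feats.shape[0] - win_v)
    ta = 4 * tv                                        # temporally aligned audio start
    if random.random() < 0.5:
        label = 1.0                                    # in sync
        a_start = ta
    else:
        label = 0.0                                    # out of sync: shift the audio
        shift = random.choice([-1, 1]) * random.randint(win_a, max_shift_a)
        a_start = min(max(ta + shift, 0), audio_feats.shape[0] - win_a)
    v = video_feats[tv:tv + win_v]
    a = audio_feats[a_start:a_start + win_a]
    return v, a, torch.tensor(label)

video = torch.randn(125, 512)    # 5 s of visual features
audio = torch.randn(500, 80)     # matching mel frames
v, a, y = make_sync_pair(video, audio)
print(v.shape, a.shape, y)       # the model scores (v, a) and is trained with BCE
```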

* Submitted to Interspeech 2022; Project Page: https://ipcv.github.io/VocaLiST/ 

VoViT: Low Latency Graph-based Audio-Visual Voice Separation Transformer

Mar 08, 2022
Juan F. Montesinos, Venkatesh S. Kadandale, Gloria Haro

This paper presents an audio-visual approach for voice separation which outperforms state-of-the-art methods at low latency in two scenarios: speech and singing voice. The model is based on a two-stage network. Motion cues are obtained with a lightweight graph convolutional network that processes face landmarks. Then, both audio and motion features are fed to an audio-visual transformer which produces a fairly good estimate of the isolated target source. In the second stage, the predominant voice is enhanced with an audio-only network. We present several ablation studies and comparisons with state-of-the-art methods. Finally, we explore the transferability of models trained for speech separation to the task of singing voice separation. The demos, code, and weights will be made publicly available at https://ipcv.github.io/VoViT/
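
A rough sketch of the "motion cues from face landmarks" stage: a tiny graph convolution mixes 2D landmark coordinates over a fixed landmark-connectivity graph, frame by frame. The adjacency and feature sizes below are placeholders; the actual graph network in the paper is more elaborate.

```python
# Minimal graph convolution over facial landmarks: per-landmark linear
# embedding followed by neighbourhood averaging with a (toy) adjacency.
import torch
import torch.nn as nn

class LandmarkGraphConv(nn.Module):
    def __init__(self, n_landmarks=68, in_dim=2, out_dim=32):
        super().__init__()
        adj = torch.eye(n_landmarks) + torch.rand(n_landmarks, n_landmarks).gt(0.9).float()
        adj = (adj + adj.T).clamp(max=1.0)                    # symmetric toy connectivity
        self.register_buffer("adj_norm", adj / adj.sum(dim=1, keepdim=True))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, landmarks):
        # landmarks: (B, T, n_landmarks, 2) facial landmark positions per frame
        x = self.lin(landmarks)                               # per-landmark embedding
        return torch.einsum("ij,btjd->btid", self.adj_norm, x)  # neighbourhood mixing

gcn = LandmarkGraphConv()
motion = gcn(torch.randn(2, 50, 68, 2))                       # -> (2, 50, 68, 32)
print(motion.shape)   # these motion features are then fed, with audio, to a transformer
```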

Learning Football Body-Orientation as a Matter of Classification

Jun 01, 2021
Adrià Arbués-Sangüesa, Adrián Martín, Paulino Granero, Coloma Ballester, Gloria Haro

Orientation is a crucial skill for football players that becomes a differential factor in a large set of events, especially the ones involving passes. However, existing orientation estimation methods, which are based on computer-vision techniques, still have a lot of room for improvement. To the best of our knowledge, this article presents the first deep learning model for estimating orientation directly from video footage. By approaching this challenge as a classification problem where classes correspond to orientation bins, and by introducing a cyclic loss function, a well-known convolutional network is refined to provide player orientation data. The model is trained by using ground-truth orientation data obtained from wearable EPTS devices, which are individually compensated with respect to the perceived orientation in the current frame. The obtained results outperform previous methods; in particular, the absolute median error is less than 12 degrees per player. An ablation study is included in order to show the potential generalization to any kind of football video footage.
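
One way to realise a cyclic loss over orientation bins (shown here as a hedged sketch, not necessarily the paper's exact formulation) is to smooth each ground-truth angle into a soft target whose mass decays with circular distance between bins, so that predicting 350 degrees for a 10-degree target is barely penalised. The bin count and smoothing width below are illustrative.

```python
# Cyclic classification loss: soft targets built from wrap-around angular
# distance, combined with a standard cross-entropy over orientation bins.
import torch
import torch.nn.functional as F

def cyclic_soft_targets(angles_deg, n_bins=24, sigma_bins=1.0):
    """angles_deg: (B,) ground-truth orientations in [0, 360)."""
    bin_centres = torch.arange(n_bins) * (360.0 / n_bins)            # (n_bins,)
    diff = (angles_deg[:, None] - bin_centres[None, :]).abs()        # (B, n_bins)
    circ = torch.minimum(diff, 360.0 - diff) / (360.0 / n_bins)      # circular distance in bins
    target = torch.exp(-0.5 * (circ / sigma_bins) ** 2)
    return target / target.sum(dim=1, keepdim=True)                  # normalised soft labels

def cyclic_loss(logits, angles_deg):
    target = cyclic_soft_targets(angles_deg, n_bins=logits.shape[1])
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

logits = torch.randn(4, 24)                      # network output over 24 orientation bins
angles = torch.tensor([5.0, 90.0, 181.0, 355.0]) # EPTS-style ground-truth orientations
print(cyclic_loss(logits, angles).item())
```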

* Accepted at the AI for Sports Analytics Workshop at IJCAI 2021 

A cappella: Audio-visual Singing Voice Separation

Apr 20, 2021
Juan F. Montesinos, Venkatesh S. Kadandale, Gloria Haro

Music source separation can be interpreted as the estimation of the constituent music sources that a music clip is composed of. In this work, we explore the single-channel singing voice separation problem from a multimodal perspective, by jointly learning from audio and visual modalities. To do so, we present Acappella, a dataset spanning around 46 hours of a cappella solo singing videos sourced from YouTube. We propose Y-Net, an audio-visual convolutional neural network which achieves state-of-the-art singing voice separation results on the Acappella dataset, and compare it against its audio-only counterpart, U-Net, and a state-of-the-art audio-visual speech separation model. Singing voice separation can be particularly challenging when the audio mixture also contains other accompanying voices and background sounds along with the target voice of interest. We demonstrate that our model can outperform the baseline models in the singing voice separation task in such challenging scenarios. The code, the pre-trained models and the dataset will be publicly available at https://ipcv.github.io/Acappella/
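
The following simplified sketch conveys the audio-visual fusion idea: visual features are injected at the bottleneck of an encoder-decoder that predicts a soft mask over the mixture spectrogram. The tiny encoder/decoder stands in for the real Y-Net; all layer sizes are placeholders.

```python
# Toy audio-visual separator: conv encoder on the mixture spectrogram,
# visual embedding added at the bottleneck, deconv decoder predicts a mask.
import torch
import torch.nn as nn

class TinyAVSeparator(nn.Module):
    def __init__(self, d_visual=512, ch=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.vis = nn.Linear(d_visual, ch)                    # project the visual cue
        self.dec = nn.Sequential(nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),
                                 nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, mix_spec, visual):
        # mix_spec: (B, 1, F, T) magnitude spectrogram of the mixture
        # visual:   (B, d_visual) clip-level visual embedding of the singer
        h = self.enc(mix_spec)
        h = h + self.vis(visual)[:, :, None, None]            # bottleneck AV fusion
        mask = self.dec(h)                                     # soft mask in [0, 1]
        return mask * mix_spec                                 # estimated singing voice

model = TinyAVSeparator()
voice = model(torch.randn(2, 1, 256, 128).abs(), torch.randn(2, 512))
print(voice.shape)
```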

* Submitted to Interspeech 2021 

Self-Supervised Small Soccer Player Detection and Tracking

Nov 20, 2020
Samuel Hurault, Coloma Ballester, Gloria Haro

In a soccer game, the information provided by detecting and tracking the players brings crucial clues to further analyze and understand some tactical aspects of the game, including individual and team actions. State-of-the-art tracking algorithms achieve impressive results in the scenarios they have been trained for, but they fail in challenging ones such as soccer games. This is frequently due to the players' small relative size and the similar appearance among players of the same team. Although a straightforward solution would be to retrain these models on a more specific dataset, the lack of such publicly available annotated datasets entails searching for other effective solutions. In this work, we propose a self-supervised pipeline which is able to detect and track low-resolution soccer players under different recording conditions without any need for ground-truth data. Extensive quantitative and qualitative experimental results are presented to evaluate its performance. We also present a comparison to several state-of-the-art methods, showing that both the proposed detector and the proposed tracker achieve top-tier results, in particular in the presence of small players.
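
One common pattern for training a detector without annotations, shown here purely as an illustrative sketch rather than the paper's exact procedure, is to run a generic pretrained person detector, keep only its confident detections as pseudo-labels, and fine-tune a student model on them.

```python
# Pseudo-labelling with a generic torchvision person detector: confident
# "person" boxes would serve as targets when fine-tuning a student detector.
import torch
import torchvision

teacher = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def pseudo_label(frame, score_thresh=0.8):
    """frame: (3, H, W) float tensor in [0, 1]; returns confident person boxes."""
    out = teacher([frame])[0]
    keep = (out["labels"] == 1) & (out["scores"] > score_thresh)   # COCO class 1 = person
    return out["boxes"][keep]

frame = torch.rand(3, 720, 1280)
boxes = pseudo_label(frame)
print(boxes.shape)
```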

* In Proceedings of the 3rd International Workshop on Multimedia Content Analysis in Sports (MMSports '20), pages 9-18 

Using Player's Body-Orientation to Model Pass Feasibility in Soccer

Apr 15, 2020
Adrià Arbués-Sangüesa, Adrián Martín, Javier Fernández, Coloma Ballester, Gloria Haro

Given a monocular video of a soccer match, this paper presents a computational model to estimate the most feasible pass at any given time. The method leverages the offensive players' orientation (plus their location) and the opponents' spatial configuration to compute the feasibility of pass events between players of the same team. Orientation data is gathered from body pose estimations that are properly projected onto the 2D game field; moreover, a geometrical solution is provided, through the definition of a feasibility measure, to determine which players are better oriented towards each other. After analyzing more than 6000 pass events, results show that, by including orientation as a feasibility measure, a robust computational model can be built, reaching more than 0.7 Top-3 accuracy. Finally, the combination of the orientation feasibility measure with the recently introduced Expected Possession Value metric is studied; promising results are obtained, showing that existing models can be refined by using orientation as a key feature. These models could help both coaches and analysts to better understand the game and to improve the players' decision-making process.
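
Below is an illustrative feasibility measure, not the paper's exact one, built from the same ingredients the abstract lists: the passer's and receiver's body orientations and the opponents' positions. Orientation alignment is scored with cosines between each player's facing direction and the line joining them, and the score is damped when a defender sits close to that line.

```python
# Toy pass-feasibility score combining orientation alignment and defender pressure.
import numpy as np

def pass_feasibility(passer_xy, passer_dir, receiver_xy, receiver_dir, defenders_xy):
    """Positions in metres; *_dir are unit facing vectors; defenders_xy: (N, 2)."""
    line = receiver_xy - passer_xy
    dist = np.linalg.norm(line)
    u = line / dist
    passer_align = max(np.dot(passer_dir, u), 0.0)        # passer facing the receiver
    receiver_align = max(np.dot(receiver_dir, -u), 0.0)   # receiver facing the passer
    # distance from each defender to the passing segment (projection onto the line)
    rel = defenders_xy - passer_xy
    along = np.clip(rel @ u, 0.0, dist)
    closest = np.min(np.linalg.norm(rel - along[:, None] * u, axis=1))
    pressure = np.exp(-closest / 3.0)                      # 3 m scale, an arbitrary choice
    return passer_align * receiver_align * (1.0 - pressure)

score = pass_feasibility(np.array([20.0, 30.0]), np.array([1.0, 0.0]),
                         np.array([35.0, 34.0]), np.array([-1.0, 0.0]),
                         np.array([[27.0, 40.0], [30.0, 20.0]]))
print(round(score, 3))
```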

* Accepted at the Computer Vision in Sports Workshop at CVPR 2020 

Vocoder-Based Speech Synthesis from Silent Videos

Apr 06, 2020
Daniel Michelsanti, Olga Slizovskaia, Gloria Haro, Emilia Gómez, Zheng-Hua Tan, Jesper Jensen

Both acoustic and visual information influence human perception of speech. For this reason, the lack of audio in a video sequence results in extremely low speech intelligibility for untrained lip readers. In this paper, we present a way to synthesise speech from the silent video of a talker using deep learning. The system learns a mapping function from raw video frames to acoustic features and reconstructs the speech with a vocoder synthesis algorithm. To improve speech reconstruction performance, our model is also trained to predict text information in a multi-task learning fashion, and it is able to simultaneously reconstruct and recognise speech in real time. The results in terms of estimated speech quality and intelligibility show the effectiveness of our method, which exhibits an improvement over existing video-to-speech approaches.
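
A compact sketch of the multi-task layout described above: a shared video encoder feeds one head that regresses vocoder acoustic features and a second head that predicts characters (trainable with, e.g., a CTC loss). The dimensions, the 3D-conv front end and the CTC pairing are illustrative assumptions.

```python
# Shared video encoder with two heads: vocoder acoustic features and character logits.
import torch
import torch.nn as nn

class Video2Speech(nn.Module):
    def __init__(self, d_model=256, n_acoustic=32, n_chars=40):
        super().__init__()
        self.frontend = nn.Sequential(                       # raw frames -> per-frame features
            nn.Conv3d(1, 32, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)))
        self.rnn = nn.GRU(32, d_model, batch_first=True, bidirectional=True)
        self.acoustic_head = nn.Linear(2 * d_model, n_acoustic)  # vocoder features per frame
        self.text_head = nn.Linear(2 * d_model, n_chars)         # character logits (for CTC)

    def forward(self, frames):
        # frames: (B, 1, T, H, W) grayscale mouth-region crops
        h = self.frontend(frames).squeeze(-1).squeeze(-1).transpose(1, 2)   # (B, T, 32)
        h, _ = self.rnn(h)
        return self.acoustic_head(h), self.text_head(h)

model = Video2Speech()
acoustic, text_logits = model(torch.randn(2, 1, 75, 64, 64))
print(acoustic.shape, text_logits.shape)   # both heads share the video encoder
```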

Multi-task U-Net for Music Source Separation

Mar 23, 2020
Venkatesh S. Kadandale, Juan F. Montesinos, Gloria Haro, Emilia Gómez

A fairly straightforward approach for music source separation is to train independent models, wherein each model is dedicated to estimating only a specific source. Training a single model to estimate multiple sources generally does not perform as well as the independent dedicated models. However, Conditioned U-Net (C-U-Net) uses a control mechanism to train a single model for multi-source separation and attempts to achieve a performance comparable to that of the dedicated models. We propose a multi-task U-Net (M-U-Net) trained with a weighted multi-task loss as an alternative to the C-U-Net. We investigate two weighting strategies for our multi-task loss: 1) Dynamic Weighted Average (DWA), and 2) Energy Based Weighting (EBW). DWA determines the weights by tracking the rate of change of the loss of each task during training. EBW aims to neutralize the effect of the training bias arising from the difference in the energy levels of the sources in a mixture. Our methods provide two-fold advantages compared to the C-U-Net: 1) fewer effective training iterations with no conditioning, and 2) fewer trainable network parameters (no control parameters). Our methods achieve performance comparable to that of C-U-Net and the dedicated U-Nets at a much lower training cost.
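
For reference, a small sketch of the Dynamic Weighted Average (DWA) scheme mentioned above: each task's weight follows the recent rate of change of its loss, so tasks whose loss decreases more slowly receive relatively more weight. The temperature and two-epoch warm-up follow the usual DWA formulation; the paper's exact instantiation may differ.

```python
# DWA weights: w_i ∝ exp(r_i / T) with r_i = L_i(t-1) / L_i(t-2),
# normalised so the weights sum to the number of tasks.
import numpy as np

def dwa_weights(loss_history, temperature=2.0):
    """loss_history: (E, K) array of per-epoch average losses for K tasks."""
    k = loss_history.shape[1]
    if loss_history.shape[0] < 2:
        return np.full(k, 1.0)                       # equal weights until 2 epochs exist
    ratio = loss_history[-1] / loss_history[-2]      # r_i = L_i(t-1) / L_i(t-2)
    exp = np.exp(ratio / temperature)
    return k * exp / exp.sum()                       # weights sum to the number of tasks

history = np.array([[1.00, 2.00, 0.50, 0.80],        # e.g. vocals, drums, bass, other
                    [0.90, 1.20, 0.45, 0.75]])
w = dwa_weights(history)
print(w.round(3))   # the weighted multi-task loss is then sum_i w_i * L_i
```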
