
Petros Maragos

Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos

Jul 22, 2022

Enhancing Affective Representations of Music-Induced EEG through Multimodal Supervision and Latent Domain Adaptation

Feb 20, 2022

Neural Emotion Director: Speech-preserving semantic control of facial expressions in "in-the-wild" videos

Dec 01, 2021

An audiovisual and contextual approach for categorical and continuous emotion recognition in-the-wild

Jul 10, 2021

Exploring Temporal Context and Human Movement Dynamics for Online Action Detection in Videos

Jun 26, 2021

Exploiting Emotional Dependencies with Graph Convolutional Networks for Facial Expression Recognition

Jun 07, 2021

Leveraging Semantic Scene Characteristics and Multi-Stream Convolutional Architectures in a Contextual Approach for Video-Based Visual Emotion Recognition in the Wild

May 16, 2021

HTMD-Net: A Hybrid Masking-Denoising Approach to Time-Domain Monaural Singing Voice Separation

Mar 07, 2021

Deep Convolutional and Recurrent Networks for Polyphonic Instrument Classification from Monophonic Raw Audio Waveforms

Feb 13, 2021

Enhancing Handwritten Text Recognition with N-gram sequence decomposition and Multitask Learning

Dec 28, 2020