Uttaran Bhattacharya

HighlightMe: Detecting Highlights from Human-Centric Videos

Oct 05, 2021

Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning

Aug 03, 2021

Learning Unseen Emotions from Gestures via Semantically-Conditioned Zero-Shot Perception with Adversarial Autoencoders

Sep 18, 2020

Emotions Don't Lie: A Deepfake Detection Method using Audio-Visual Affective Cues

Mar 17, 2020

EmotiCon: Context-Aware Multimodal Emotion Recognition using Frege's Principle

Mar 14, 2020

CMetric: A Driving Behavior Measure Using Centrality Functions

Mar 09, 2020

The Liar's Walk: Detecting Deception with Gait and Gesture

Dec 20, 2019

Forecasting Trajectory and Behavior of Road-Agents Using Spectral Clustering in Graph-LSTMs

Dec 02, 2019

M3ER: Multiplicative Multimodal Emotion Recognition Using Facial, Textual, and Speech Cues

Nov 22, 2019

Take an Emotion Walk: Perceiving Emotions from Gaits Using Hierarchical Attention Pooling and Affective Mapping

Nov 20, 2019