Trisha Mittal

Naturalistic Head Motion Generation from Speech

Oct 26, 2022
Trisha Mittal, Zakaria Aldeneh, Masha Fedzechkina, Anurag Ranjan, Barry-John Theobald

Video Manipulations Beyond Faces: A Dataset with Human-Machine Analysis

Jul 27, 2022
Trisha Mittal, Ritwik Sinha, Viswanathan Swaminathan, John Collomosse, Dinesh Manocha

3MASSIV: Multilingual, Multimodal and Multi-Aspect dataset of Social Media Short Videos

Mar 28, 2022
Vikram Gupta, Trisha Mittal, Puneet Mathur, Vaibhav Mishra, Mayank Maheshwari, Aniket Bera, Debdoot Mukherjee, Dinesh Manocha

Affect2MM: Affective Analysis of Multimedia Content Using Emotion Causality

Mar 11, 2021
Trisha Mittal, Puneet Mathur, Aniket Bera, Dinesh Manocha

Dynamic Graph Modeling of Simultaneous EEG and Eye-tracking Data for Reading Task Identification

Feb 21, 2021
Puneet Mathur, Trisha Mittal, Dinesh Manocha

MCQA: Multimodal Co-attention Based Network for Question Answering

Apr 25, 2020
Abhishek Kumar, Trisha Mittal, Dinesh Manocha

Emotions Don't Lie: A Deepfake Detection Method using Audio-Visual Affective Cues

Mar 17, 2020
Trisha Mittal, Uttaran Bhattacharya, Rohan Chandra, Aniket Bera, Dinesh Manocha

EmotiCon: Context-Aware Multimodal Emotion Recognition using Frege's Principle

Mar 14, 2020
Trisha Mittal, Pooja Guhan, Uttaran Bhattacharya, Rohan Chandra, Aniket Bera, Dinesh Manocha

CMetric: A Driving Behavior Measure Using Centrality Functions

Mar 09, 2020
Rohan Chandra, Uttaran Bhattacharya, Trisha Mittal, Aniket Bera, Dinesh Manocha
