Shrikanth Narayanan

Signal Analysis and Interpretation Lab, University of Southern California; Information Sciences Institute, University of Southern California

Signal Processing Grand Challenge 2023 -- e-Prevention: Sleep Behavior as an Indicator of Relapses in Psychotic Patients

Apr 17, 2023
Kleanthis Avramidis, Kranti Adsul, Digbalay Bose, Shrikanth Narayanan

Designing and Evaluating Speech Emotion Recognition Systems: A reality check case study with IEMOCAP

Apr 03, 2023
Nikolaos Antoniou, Athanasios Katsamanis, Theodoros Giannakopoulos, Shrikanth Narayanan

Contextually-rich human affect perception using multimodal scene information

Mar 13, 2023
Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Shrikanth Narayanan

A dataset for Audio-Visual Sound Event Detection in Movies

Feb 14, 2023
Rajat Hebbar, Digbalay Bose, Krishna Somandepalli, Veena Vijai, Shrikanth Narayanan

Exploring Workplace Behaviors through Speaking Patterns using Large-scale Multimodal Wearable Recordings: A Study of Healthcare Providers

Dec 18, 2022
Tiantian Feng, Shrikanth Narayanan

Audio-Visual Activity Guided Cross-Modal Identity Association for Active Speaker Detection

Dec 01, 2022
Rahul Sharma, Shrikanth Narayanan

Can Knowledge of End-to-End Text-to-Speech Models Improve Neural MIDI-to-Audio Synthesis Systems?

Nov 25, 2022
Xuan Shi, Erica Cooper, Xin Wang, Junichi Yamagishi, Shrikanth Narayanan

A Context-Aware Computational Approach for Measuring Vocal Entrainment in Dyadic Conversations

Nov 07, 2022
Rimita Lahiri, Md Nasir, Catherine Lord, So Hyun Kim, Shrikanth Narayanan

Using Emotion Embeddings to Transfer Knowledge Between Emotions, Languages, and Annotation Formats

Oct 31, 2022
Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, Shrikanth Narayanan

Leveraging Label Correlations in a Multi-label Setting: A Case Study in Emotion

Oct 28, 2022
Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, Shrikanth Narayanan
