Vikramjit Mitra

Investigating Salient Representations and Label Variance in Dimensional Speech Emotion Analysis

Dec 17, 2023
Vikramjit Mitra, Jingping Nie, Erdrin Azemi

Pre-trained Model Representations and their Robustness against Noise for Speech Emotion Analysis

Mar 03, 2023
Vikramjit Mitra, Vasudha Kowtha, Hsiang-Yun Sherry Chien, Erdrin Azemi, Carlos Avendano

Speech Emotion: Investigating Model Representations, Multi-Task Learning and Knowledge Distillation

Jul 02, 2022
Vikramjit Mitra, Hsiang-Yun Sherry Chien, Vasudha Kowtha, Joseph Yitan Cheng, Erdrin Azemi

Estimating Respiratory Rate From Breath Audio Obtained Through Wearable Microphones

Jul 28, 2021
Agni Kumar, Vikramjit Mitra, Carolyn Oliver, Adeeti Ullal, Matt Biddulph, Irida Mance

Analysis and Tuning of a Voice Assistant System for Dysfluent Speech

Jun 18, 2021
Vikramjit Mitra, Zifang Huang, Colin Lea, Lauren Tooley, Sarah Wu, Darren Botten, Ashwini Palekar, Shrinath Thelapurath, Panayiotis Georgiou, Sachin Kajarekar, Jeffrey Bigham

SEP-28k: A Dataset for Stuttering Event Detection From Podcasts With People Who Stutter

Feb 24, 2021
Colin Lea, Vikramjit Mitra, Aparna Joshi, Sachin Kajarekar, Jeffrey P. Bigham

Detecting Emotion Primitives from Speech and their use in discerning Categorical Emotions

Jan 31, 2020
Vasudha Kowtha, Vikramjit Mitra, Chris Bartels, Erik Marchi, Sue Booker, William Caruso, Sachin Kajarekar, Devang Naik

Investigation and Analysis of Hyper and Hypo neuron pruning to selectively update neurons during Unsupervised Adaptation

Jan 06, 2020
Vikramjit Mitra, Horacio Franco

Leveraging Acoustic Cues and Paralinguistic Embeddings to Detect Expression from Voice

Jun 28, 2019
Vikramjit Mitra, Sue Booker, Erik Marchi, David Scott Farrar, Ute Dorothea Peitz, Bridget Cheng, Ermine Teves, Anuj Mehta, Devang Naik

Articulatory and bottleneck features for speaker-independent ASR of dysarthric speech

May 21, 2019
Emre Yılmaz, Vikramjit Mitra, Ganesh Sivaraman, Horacio Franco