Sharon Goldwater

Analyzing the relationships between pretraining language, phonetic, tonal, and speaker information in self-supervised speech models

Jun 12, 2025

Effective Context in Neural Speech Models

May 28, 2025

Revisiting Common Assumptions about Arabic Dialects in NLP

May 27, 2025

A Grounded Typology of Word Classes

Dec 13, 2024

Orthogonality and isotropy of speaker and phonetic information in self-supervised speech representations

Jun 13, 2024

Estimating the Level of Dialectness Predicts Interannotator Agreement in Multi-dialect Arabic Datasets

May 18, 2024

A predictive learning model can simulate temporal dynamics and context effects found in neural representations of continuous speech

May 13, 2024

ALDi: Quantifying the Arabic Level of Dialectness of Text

Oct 20, 2023

Acoustic Word Embeddings for Untranscribed Target Languages with Continued Pretraining and Learned Pooling

Jun 03, 2023

Self-supervised Predictive Coding Models Encode Speaker and Phonetic Information in Orthogonal Subspaces

May 21, 2023