Sharon Goldwater

ALDi: Quantifying the Arabic Level of Dialectness of Text

Oct 20, 2023
Amr Keleg, Sharon Goldwater, Walid Magdy

Transcribed speech and user-generated text in Arabic typically contain a mixture of Modern Standard Arabic (MSA), the standardized language taught in schools, and Dialectal Arabic (DA), used in daily communications. To handle this variation, previous work in Arabic NLP has focused on Dialect Identification (DI) on the sentence or the token level. However, DI treats the task as binary, whereas we argue that Arabic speakers perceive a spectrum of dialectness, which we operationalize at the sentence level as the Arabic Level of Dialectness (ALDi), a continuous linguistic variable. We introduce the AOC-ALDi dataset (derived from the AOC dataset), containing 127,835 sentences (17% from news articles and 83% from user comments on those articles) which are manually labeled with their level of dialectness. We provide a detailed analysis of AOC-ALDi and show that a model trained on it can effectively identify levels of dialectness on a range of other corpora (including dialects and genres not included in AOC-ALDi), providing a more nuanced picture than traditional DI systems. Through case studies, we illustrate how ALDi can reveal Arabic speakers' stylistic choices in different situations, a useful property for sociolinguistic analyses.

* Accepted to EMNLP 2023 
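
The abstract frames ALDi estimation as predicting a continuous, sentence-level score rather than a binary MSA/DA label. As a rough illustration of that setup (not the authors' released model), the sketch below scores a sentence with a pretrained Arabic encoder fitted with a single-output regression head; the checkpoint name is a placeholder and the clamping to [0, 1] is an assumption.

```python
# Minimal sketch of sentence-level dialectness regression.
# Assumptions: a placeholder Arabic encoder checkpoint with a 1-output regression
# head fine-tuned on AOC-ALDi-style (sentence, score) pairs; not the paper's model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINT = "arabic-encoder-finetuned-on-aoc-aldi"  # hypothetical name

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=1).eval()

def aldi_score(sentence: str) -> float:
    """Return a continuous level-of-dialectness estimate for one sentence."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        score = model(**inputs).logits.squeeze().item()
    return min(max(score, 0.0), 1.0)  # clamp: the regression head itself is unbounded
```

A DI system would instead emit a hard label per sentence or token; a continuous score like this is what lets the case studies track gradual stylistic shifts.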

Acoustic Word Embeddings for Untranscribed Target Languages with Continued Pretraining and Learned Pooling

Jun 03, 2023
Ramon Sanabria, Ondrej Klejch, Hao Tang, Sharon Goldwater

Acoustic word embeddings are typically created by training a pooling function using pairs of word-like units. For unsupervised systems, these are mined using k-nearest neighbor (KNN) search, which is slow. Recently, mean-pooled representations from a pre-trained self-supervised English model were suggested as a promising alternative, but their performance on target languages was not fully competitive. Here, we explore improvements to both approaches: we use continued pre-training to adapt the self-supervised model to the target language, and we use a multilingual phone recognizer (MPR) to mine phone n-gram pairs for training the pooling function. Evaluating on four languages, we show that both methods outperform a recent approach on word discrimination. Moreover, the MPR method is orders of magnitude faster than KNN, and is highly data efficient. We also show a small improvement from performing learned pooling on top of the continued pre-trained representations.

* Accepted to Interspeech 2023 
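
To make the "learned pooling" idea concrete, here is a small sketch of a pooling function trained so that mined positive pairs (e.g. matching phone n-grams found by the MPR) map to nearby embeddings. The attention-pooling architecture and the contrastive loss are illustrative assumptions, not necessarily the paper's exact training setup.

```python
# Sketch: learn a pooling function over frame-level features using mined positive pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    """Pool (batch, time, dim) frame features into (batch, dim) via learned weights."""
    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        weights = self.scorer(frames).softmax(dim=1)   # one weight per frame
        return (weights * frames).sum(dim=1)

def paired_contrastive_loss(emb_a, emb_b, temperature=0.1):
    """Item i of emb_a should be closest to item i of emb_b (in-batch negatives)."""
    sims = F.normalize(emb_a, dim=-1) @ F.normalize(emb_b, dim=-1).T / temperature
    return F.cross_entropy(sims, torch.arange(emb_a.size(0)))

# Hypothetical batch of mined pairs: two realisations of the same phone n-grams,
# already encoded to frame features (e.g. by the continued-pretrained model).
dim = 64
pool = AttentionPool(dim)
optimizer = torch.optim.Adam(pool.parameters(), lr=1e-3)
frames_a, frames_b = torch.randn(8, 30, dim), torch.randn(8, 30, dim)

loss = paired_contrastive_loss(pool(frames_a), pool(frames_b))
loss.backward()
optimizer.step()
```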

Self-supervised Predictive Coding Models Encode Speaker and Phonetic Information in Orthogonal Subspaces

May 21, 2023
Oli Liu, Hao Tang, Sharon Goldwater

Self-supervised speech representations are known to encode both speaker and phonetic information, but how they are distributed in the high-dimensional space remains largely unexplored. We hypothesize that they are encoded in orthogonal subspaces, a property that lends itself to simple disentanglement. Applying principal component analysis to representations of two predictive coding models, we identify two subspaces that capture speaker and phonetic variances, and confirm that they are nearly orthogonal. Based on this property, we propose a new speaker normalization method which collapses the subspace that encodes speaker information, without requiring transcriptions. Probing experiments show that our method effectively eliminates speaker information and outperforms a previous baseline in phone discrimination tasks. Moreover, the approach generalizes and can be used to remove information about unseen speakers.
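
The proposed normalization can be pictured as projecting representations onto the orthogonal complement of an estimated speaker subspace. The sketch below is one plausible way to do that with plain SVD; estimating speaker directions from per-speaker mean vectors (which requires speaker labels) and the choice of how many components to collapse are simplifying assumptions, not necessarily the paper's exact procedure.

```python
# Sketch: collapse an estimated "speaker subspace" in self-supervised representations.
# Assumptions (not from the paper): speaker directions come from SVD of per-speaker
# mean vectors using known speaker labels, and k is chosen by hand.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frame-level representations (n_frames x dim) with speaker labels.
dim, n_speakers, frames_per_spk = 64, 10, 200
reps = rng.normal(size=(n_speakers * frames_per_spk, dim))
speakers = np.repeat(np.arange(n_speakers), frames_per_spk)

# 1. Per-speaker mean vectors, centred across speakers.
spk_means = np.stack([reps[speakers == s].mean(axis=0) for s in range(n_speakers)])
spk_means -= spk_means.mean(axis=0)

# 2. SVD of the speaker means: the top right-singular vectors span the speaker subspace.
_, _, vt = np.linalg.svd(spk_means, full_matrices=False)
k = 5
speaker_basis = vt[:k]            # (k, dim), orthonormal rows

# 3. Speaker normalization: subtract each frame's projection onto the speaker subspace.
normalized = reps - reps @ speaker_basis.T @ speaker_basis

# Sanity check: normalized frames carry (numerically) no energy along speaker directions.
print(np.abs(normalized @ speaker_basis.T).max())
```

Because the speaker and phonetic subspaces are found to be nearly orthogonal, collapsing the speaker directions in this way leaves the phonetic information largely intact, which is why phone discrimination survives the normalization.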

Prosodic features improve sentence segmentation and parsing

Feb 23, 2023
Elizabeth Nielsen, Sharon Goldwater, Mark Steedman

Parsing spoken dialogue presents challenges that parsing text does not, including a lack of clear sentence boundaries. We know from previous work that prosody helps in parsing single sentences (Tran et al. 2018), but here we examine the effect of prosody on parsing speech that is not segmented into sentences. In experiments on the English Switchboard corpus, we find that prosody helps our model both with parsing and with accurately identifying sentence boundaries. However, the best-performing parser is not necessarily the one that produces the best sentence segmentation. We suggest that the best parses instead come from modelling sentence boundaries jointly with other constituent boundaries.

* arXiv admin note: text overlap with arXiv:2105.12667 

Analyzing Acoustic Word Embeddings from Pre-trained Self-supervised Speech Models

Oct 28, 2022
Ramon Sanabria, Hao Tang, Sharon Goldwater

Given the strong results of self-supervised models on various tasks, there have been surprisingly few studies exploring self-supervised representations for acoustic word embeddings (AWE), fixed-dimensional vectors representing variable-length spoken word segments. In this work, we study several pre-trained models and pooling methods for constructing AWEs with self-supervised representations. Owing to the contextualized nature of self-supervised representations, we hypothesize that simple pooling methods, such as averaging, might already be useful for constructing AWEs. When evaluating on a standard word discrimination task, we find that HuBERT representations with mean-pooling rival the state of the art on English AWEs. More surprisingly, despite being trained only on English, HuBERT representations evaluated on Xitsonga, Mandarin, and French consistently outperform the multilingual model XLSR-53 (as well as Wav2Vec 2.0 trained on English).

* Submitted to IEEE ICASSP 2023 
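
The core recipe in this abstract, mean-pooling frame features from a pretrained model over a word segment, is simple enough to sketch directly. The checkpoint name and the use of the final layer below are illustrative assumptions (the paper compares several models and layers); the 50 frames-per-second rate matches HuBERT's 20 ms frame stride on 16 kHz audio.

```python
# Sketch: an acoustic word embedding as the mean of HuBERT frame features over a segment.
# Assumptions: the checkpoint name and the use of the final transformer layer.
import torch
from transformers import AutoFeatureExtractor, HubertModel

CHECKPOINT = "facebook/hubert-base-ls960"  # an English HuBERT base checkpoint
extractor = AutoFeatureExtractor.from_pretrained(CHECKPOINT)
model = HubertModel.from_pretrained(CHECKPOINT).eval()

FRAME_RATE = 50.0  # HuBERT emits roughly one frame every 20 ms of 16 kHz audio

def acoustic_word_embedding(waveform_16khz, start_s: float, end_s: float) -> torch.Tensor:
    """Mean-pool frame features inside [start_s, end_s) into one fixed-size vector.

    waveform_16khz: 1-D array of audio samples at 16 kHz.
    """
    inputs = extractor(waveform_16khz, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        frames = model(**inputs).last_hidden_state[0]   # (n_frames, hidden_dim)
    lo = int(start_s * FRAME_RATE)
    hi = max(int(end_s * FRAME_RATE), lo + 1)           # keep at least one frame
    return frames[lo:hi].mean(dim=0)

# Word discrimination then reduces to comparing two such embeddings, e.g. with
# torch.nn.functional.cosine_similarity, and checking that same-word pairs score higher.
```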

Cross-linguistically Consistent Semantic and Syntactic Annotation of Child-directed Speech

Sep 22, 2021
Ida Szubert, Omri Abend, Nathan Schneider, Samuel Gibbon, Sharon Goldwater, Mark Steedman

While corpora of child speech and child-directed speech (CDS) have enabled major contributions to the study of child language acquisition, semantic annotation for such corpora is still scarce and lacks a uniform standard. We compile two CDS corpora with sentential logical forms, one in English and the other in Hebrew. In compiling the corpora we employ a methodology that enforces a cross-linguistically consistent representation, building on recent advances in dependency representation and semantic parsing. The corpora are based on a sizable portion of Brown's Adam corpus from CHILDES (about 80% of its child-directed utterances) and on all child-directed utterances from Berman's Hebrew CHILDES corpus Hagar. We begin by annotating the corpora with the Universal Dependencies (UD) scheme for syntactic annotation, motivated by its applicability to a wide variety of domains and languages. We then apply an automatic method for transducing sentential logical forms (LFs) from the UD structures. The two representations have complementary strengths: UD structures are language-neutral and support direct annotation, whereas LFs are neutral as to the interface between syntax and semantics, and transparently encode semantic distinctions. We verify the quality of the UD annotation with an inter-annotator agreement study. We then demonstrate the utility of the compiled corpora through a longitudinal corpus study of the prevalence of different syntactic and semantic phenomena.

On the Difficulty of Segmenting Words with Attention

Sep 21, 2021
Ramon Sanabria, Hao Tang, Sharon Goldwater

Word segmentation, the problem of finding word boundaries in speech, is of interest for a range of tasks. Previous papers have suggested that for sequence-to-sequence models trained on tasks such as speech translation or speech recognition, attention can be used to locate and segment the words. We show, however, that even on monolingual data this approach is brittle. In our experiments with different input types, data sizes, and segmentation algorithms, only models trained to predict phones from words succeed in the task. Models trained to predict words from either phones or speech (i.e., the opposite direction, which is the one needed to generalize to new data) yield much worse results, suggesting that attention-based segmentation is only useful in limited scenarios.

* Accepted at the "Workshop on Insights from Negative Results in NLP" (EMNLP 2021) 
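
For readers unfamiliar with the setup, the idea being tested is roughly the following: take the attention matrix of a trained sequence-to-sequence model, assign each input position to the output token that attends to it most, and read word boundaries off the points where that assignment changes. The toy sketch below illustrates that procedure with a fabricated attention matrix; the paper evaluates several segmentation algorithms, of which this hard-assignment rule is only one simple variant.

```python
# Toy sketch: derive word boundaries from a seq2seq attention matrix by hard assignment.
# The attention weights here are fabricated purely for illustration.
import numpy as np

# Hypothetical attention: 3 output words x 12 input phones/frames.
attention = np.array([
    [0.8, 0.7, 0.6, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.1, 0.2, 0.3, 0.8, 0.9, 0.7, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0],
    [0.1, 0.1, 0.1, 0.1, 0.1, 0.3, 0.8, 0.9, 1.0, 1.0, 1.0, 1.0],
])

assignment = attention.argmax(axis=0)                 # which output word "owns" each input
boundaries = np.flatnonzero(np.diff(assignment)) + 1  # indices where ownership changes

print(assignment)   # [0 0 0 1 1 1 2 2 2 2 2 2]
print(boundaries)   # [3 6] -> hypothesised boundaries before input positions 3 and 6
```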

Prosodic segmentation for parsing spoken dialogue

May 26, 2021
Elizabeth Nielsen, Mark Steedman, Sharon Goldwater

Parsing spoken dialogue poses unique difficulties, including disfluencies and unmarked boundaries between sentence-like units. Previous work has shown that prosody can help with parsing disfluent speech (Tran et al. 2018), but has assumed that the input to the parser is already segmented into sentence-like units (SUs), which isn't true in existing speech applications. We investigate how prosody affects a parser that receives an entire dialogue turn as input (a turn-based model), instead of gold standard pre-segmented SUs (an SU-based model). In experiments on the English Switchboard corpus, we find that when using transcripts alone, the turn-based model has trouble segmenting SUs, leading to worse parse performance than the SU-based model. However, prosody can effectively replace gold standard SU boundaries: with prosody, the turn-based model performs as well as the SU-based model (90.79 vs. 90.65 F1 score, respectively), despite performing two tasks (SU segmentation and parsing) rather than one (parsing alone). Analysis shows that pitch and intensity features are the most important for this corpus, since they allow the model to correctly distinguish an SU boundary from a speech disfluency -- a distinction that the model otherwise struggles to make.

Black or White but never neutral: How readers perceive identity from yellow or skin-toned emoji

May 12, 2021
Alexander Robertson, Walid Magdy, Sharon Goldwater

Research in sociology and linguistics shows that people use language not only to express their own identity but to understand the identity of others. Recent work established a connection between expression of identity and emoji usage on social media, through the use of emoji skin tone modifiers. Motivated by that finding, this work asks whether, as with language, readers are sensitive to such acts of self-expression and use them to understand the identity of authors. In behavioral experiments (n=488), where the text and emoji content of social media posts were carefully controlled before being presented to participants, we find that the answer is yes: emoji are a salient signal of author identity. That signal is distinct from, and complementary to, the one encoded in language. Participant groups (based on self-identified ethnicity) showed no differences in how they perceive this signal, except in the case of the default yellow emoji. While both groups associate it with a White identity, the effect was stronger in White participants. Our finding that emoji can index social variables has experimental applications for researchers, but also implications for designers: supposedly "neutral" defaults may be more representative of some users than others.

Identity Signals in Emoji Do not Influence Perception of Factual Truth on Twitter

May 07, 2021
Alexander Robertson, Walid Magdy, Sharon Goldwater

Prior work has shown that Twitter users use skin-toned emoji as an act of self-representation to express their racial/ethnic identity. We test whether this signal of identity can influence readers' perceptions about the content of a post containing that signal. In a large scale (n=944) pre-registered controlled experiment, we manipulate the presence of skin-toned emoji and profile photos in a task where readers rate obscure trivia facts (presented as tweets) as true or false. Using a Bayesian statistical analysis, we find that neither emoji nor profile photo has an effect on how readers rate these facts. This result will be of some comfort to anyone concerned about the manipulation of online users through the crafting of fake profiles.
