Yevgen Matusevych

Visually Grounded Speech Models have a Mutual Exclusivity Bias

Mar 20, 2024
Leanne Nortje, Dan Oneaţă, Yevgen Matusevych, Herman Kamper

Acoustic word embeddings for zero-resource languages using self-supervised contrastive learning and multilingual adaptation

Mar 19, 2021
Christiaan Jacobs, Yevgen Matusevych, Herman Kamper

A phonetic model of non-native spoken word processing

Jan 27, 2021
Yevgen Matusevych, Herman Kamper, Thomas Schatz, Naomi H. Feldman, Sharon Goldwater

Evaluating computational models of infant phonetic learning across languages

Aug 06, 2020
Yevgen Matusevych, Thomas Schatz, Herman Kamper, Naomi H. Feldman, Sharon Goldwater

Improved acoustic word embeddings for zero-resource languages using multilingual transfer

Jun 02, 2020
Herman Kamper, Yevgen Matusevych, Sharon Goldwater

Analyzing autoencoder-based acoustic word embeddings

Apr 03, 2020
Yevgen Matusevych, Herman Kamper, Sharon Goldwater

Multilingual acoustic word embedding models for processing zero-resource languages

Feb 21, 2020
Herman Kamper, Yevgen Matusevych, Sharon Goldwater

Are we there yet? Encoder-decoder neural networks as cognitive models of English past tense inflection

Jun 04, 2019
Maria Corkery, Yevgen Matusevych, Sharon Goldwater
