Leanne Nortje

Visually Grounded Speech Models have a Mutual Exclusivity Bias

Mar 20, 2024
Leanne Nortje, Dan Oneaţă, Yevgen Matusevych, Herman Kamper

Visually grounded few-shot word learning in low-resource settings

Jun 21, 2023
Leanne Nortje, Dan Oneaţă, Herman Kamper

Visually grounded few-shot word acquisition with fewer shots

May 25, 2023
Leanne Nortje, Benjamin van Niekerk, Herman Kamper

Towards visually prompted keyword localisation for zero-resource spoken languages

Oct 12, 2022
Leanne Nortje, Herman Kamper

Analyzing Speaker Information in Self-Supervised Models to Improve Zero-Resource Speech Processing

Aug 02, 2021
Benjamin van Niekerk, Leanne Nortje, Matthew Baas, Herman Kamper

Direct multimodal few-shot learning of speech and images

Dec 10, 2020
Leanne Nortje, Herman Kamper

Unsupervised vs. transfer learning for multimodal one-shot matching of speech and images

Aug 14, 2020
Leanne Nortje, Herman Kamper

Vector-quantized neural networks for acoustic unit discovery in the ZeroSpeech 2020 challenge

May 19, 2020
Benjamin van Niekerk, Leanne Nortje, Herman Kamper

Unsupervised acoustic unit discovery for speech synthesis using discrete latent-variable neural networks

Apr 16, 2019
Ryan Eloff, André Nortje, Benjamin van Niekerk, Avashna Govender, Leanne Nortje, Arnu Pretorius, Elan van Biljon, Ewald van der Westhuizen, Lisa van Staden, Herman Kamper
