Leanne Nortje

The mutual exclusivity bias of bilingual visually grounded speech models

Jun 04, 2025

Improved Visually Prompted Keyword Localisation in Real Low-Resource Settings

Sep 09, 2024

Visually Grounded Speech Models for Low-resource Languages and Cognitive Modelling

Sep 03, 2024

Visually Grounded Speech Models have a Mutual Exclusivity Bias

Mar 20, 2024

Visually grounded few-shot word learning in low-resource settings

Jun 21, 2023

Visually grounded few-shot word acquisition with fewer shots

May 25, 2023

Towards visually prompted keyword localisation for zero-resource spoken languages

Oct 12, 2022

Analyzing Speaker Information in Self-Supervised Models to Improve Zero-Resource Speech Processing

Aug 02, 2021

Direct multimodal few-shot learning of speech and images

Dec 10, 2020

Unsupervised vs. transfer learning for multimodal one-shot matching of speech and images

Aug 14, 2020