Danny Merkx

Modelling word learning and recognition using visually grounded speech

Mar 14, 2022
Danny Merkx, Sebastiaan Scholten, Stefan L. Frank, Mirjam Ernestus, Odette Scharenborg

Seeing the advantage: visually grounding word embeddings to better capture human semantic knowledge

Feb 21, 2022
Danny Merkx, Stefan L. Frank, Mirjam Ernestus

Semantic sentence similarity: size does not always matter

Jun 16, 2021
Danny Merkx, Stefan L. Frank, Mirjam Ernestus

Learning to Recognise Words using Visually Grounded Speech

May 31, 2020
Sebastiaan Scholten, Danny Merkx, Odette Scharenborg

Comparing Transformers and RNNs on predicting human sentence processing data

May 19, 2020
Danny Merkx, Stefan L. Frank

Language learning using Speech to Image retrieval

Sep 09, 2019
Danny Merkx, Stefan L. Frank, Mirjam Ernestus

Learning semantic sentence representations from visually grounded language without lexical knowledge

Mar 27, 2019
Danny Merkx, Stefan Frank

Linguistic unit discovery from multi-modal inputs in unwritten languages: Summary of the "Speaking Rosetta" JSALT 2017 Workshop

Feb 14, 2018
Odette Scharenborg, Laurent Besacier, Alan Black, Mark Hasegawa-Johnson, Florian Metze, Graham Neubig, Sebastian Stueker, Pierre Godard, Markus Mueller, Lucas Ondel, Shruti Palaskar, Philip Arthur, Francesco Ciannella, Mingxing Du, Elin Larsen, Danny Merkx, Rachid Riad, Liming Wang, Emmanuel Dupoux