Emmanuel Dupoux

IntPhys: A Framework and Benchmark for Visual Intuitive Physics Reasoning

Jun 26, 2018
Ronan Riochet, Mario Ynocente Castro, Mathieu Bernard, Adam Lerer, Rob Fergus, Véronique Izard, Emmanuel Dupoux

End-to-End Speech Recognition From the Raw Waveform

Jun 21, 2018
Neil Zeghidour, Nicolas Usunier, Gabriel Synnaeve, Ronan Collobert, Emmanuel Dupoux

Learning Filterbanks from Raw Speech for Phone Recognition

Apr 04, 2018
Neil Zeghidour, Nicolas Usunier, Iasonas Kokkinos, Thomas Schatz, Gabriel Synnaeve, Emmanuel Dupoux

Bayesian Models for Unit Discovery on a Very Low Resource Language

Feb 20, 2018
Lucas Ondel, Pierre Godard, Laurent Besacier, Elin Larsen, Mark Hasegawa-Johnson, Odette Scharenborg, Emmanuel Dupoux, Lukas Burget, François Yvon, Sanjeev Khudanpur

Cognitive Science in the era of Artificial Intelligence: A roadmap for reverse-engineering the infant language-learner

Feb 14, 2018
Emmanuel Dupoux

Linguistic unit discovery from multi-modal inputs in unwritten languages: Summary of the "Speaking Rosetta" JSALT 2017 Workshop

Feb 14, 2018
Odette Scharenborg, Laurent Besacier, Alan Black, Mark Hasegawa-Johnson, Florian Metze, Graham Neubig, Sebastian Stueker, Pierre Godard, Markus Mueller, Lucas Ondel, Shruti Palaskar, Philip Arthur, Francesco Ciannella, Mingxing Du, Elin Larsen, Danny Merkx, Rachid Riad, Liming Wang, Emmanuel Dupoux

Are words easier to learn from infant- than adult-directed speech? A quantitative corpus-based investigation

Dec 23, 2017
Adriana Guevara-Rukoz, Alejandrina Cristia, Bogdan Ludusan, Roland Thiollière, Andrew Martin, Reiko Mazuka, Emmanuel Dupoux

The Zero Resource Speech Challenge 2017

Dec 12, 2017
Ewan Dunbar, Xuan Nga Cao, Juan Benjumea, Julien Karadayi, Mathieu Bernard, Laurent Besacier, Xavier Anguera, Emmanuel Dupoux

Learning weakly supervised multimodal phoneme embeddings

Oct 18, 2017
Rahma Chaabouni, Ewan Dunbar, Neil Zeghidour, Emmanuel Dupoux

Blind phoneme segmentation with temporal prediction errors

May 27, 2017
Paul Michel, Okko Räsänen, Roland Thiollière, Emmanuel Dupoux
