Noé Tits

MUST&P-SRL: Multi-lingual and Unified Syllabification in Text and Phonetic Domains for Speech Representation Learning

Oct 17, 2023
Noé Tits


Flowchase: a Mobile Application for Pronunciation Training

Jul 05, 2023
Noé Tits, Zoé Broisson

(3 figures)

Where Is My Mind (looking at)? Predicting Visual Attention from Brain Activity

Jan 11, 2022
Victor Delvigne, Noé Tits, Luca La Fisca, Nathan Hubens, Antoine Maiorca, Hazem Wannous, Thierry Dutoit, Jean-Philippe Vandeborre

(4 figures)

Analysis and Assessment of Controllability of an Expressive Deep Learning-based TTS system

Mar 06, 2021
Noé Tits, Kevin El Haddad, Thierry Dutoit

(4 figures)

Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition

Oct 05, 2020
Jean-Benoit Delbrouck, Noé Tits, Stéphane Dupont

(4 figures)

ICE-Talk: an Interface for a Controllable Expressive Talking Machine

Aug 25, 2020
Noé Tits, Kevin El Haddad, Thierry Dutoit

(2 figures)

Laughter Synthesis: Combining Seq2seq modeling with Transfer Learning

Aug 20, 2020
Noé Tits, Kevin El Haddad, Thierry Dutoit

(4 figures)

A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis

Jun 29, 2020
Jean-Benoit Delbrouck, Noé Tits, Mathilde Brousmiche, Stéphane Dupont

(4 figures)

The Theory behind Controllable Expressive Speech Synthesis: a Cross-disciplinary Approach

Oct 14, 2019
Noé Tits, Kevin El Haddad, Thierry Dutoit

(4 figures)

A Methodology for Controlling the Emotional Expressiveness in Synthetic Speech -- a Deep Learning approach

Jul 05, 2019
Noé Tits

(2 figures)