Rui Liu

Explicit Intensity Control for Accented Text-to-speech

Oct 27, 2022
Rui Liu, Haolin Zuo, De Hu, Guanglai Gao, Haizhou Li

FCTalker: Fine and Coarse Grained Context Modeling for Expressive Conversational Speech Synthesis

Oct 27, 2022
Yifan Hu, Rui Liu, Guanglai Gao, Haizhou Li

Exploiting modality-invariant feature for robust multimodal emotion recognition with missing modalities

Oct 27, 2022
Haolin Zuo, Rui Liu, Jinming Zhao, Guanglai Gao, Haizhou Li

A Deep Investigation of RNN and Self-attention for the Cyrillic-Traditional Mongolian Bidirectional Conversion

Sep 24, 2022
Muhan Na, Rui Liu, Feilong, Guanglai Gao

MnTTS: An Open-Source Mongolian Text-to-Speech Synthesis Dataset and Accompanied Baseline

Sep 22, 2022
Yifan Hu, Pengkai Yin, Rui Liu, Feilong Bao, Guanglai Gao

A Spatial-channel-temporal-fused Attention for Spiking Neural Networks

Sep 22, 2022
Wuque Cai, Hongze Sun, Rui Liu, Yan Cui, Jun Wang, Yang Xia, Dezhong Yao, Daqing Guo

Controllable Accented Text-to-Speech Synthesis

Sep 22, 2022
Rui Liu, Berrak Sisman, Guanglai Gao, Haizhou Li

Generalizable Memory-driven Transformer for Multivariate Long Sequence Time-series Forecasting

Jul 16, 2022
Mingjie Li, Xiaoyun Zhao, Rui Liu, Changlin Li, Xiaohan Wang, Xiaojun Chang

Accurate Emotion Strength Assessment for Seen and Unseen Speech Based on Data-Driven Deep Learning

Jun 15, 2022
Rui Liu, Berrak Sisman, Björn Schuller, Guanglai Gao, Haizhou Li

Gating Dropout: Communication-efficient Regularization for Sparsely Activated Transformers

May 28, 2022
Rui Liu, Young Jin Kim, Alexandre Muzio, Barzan Mozafari, Hany Hassan Awadalla
