Guanglai Gao

Mitigating Heterogeneity among Factor Tensors via Lie Group Manifolds for Tensor Decomposition Based Temporal Knowledge Graph Embedding

Apr 14, 2024
Jiang Li, Xiangdong Su, Yeyun Gong, Guanglai Gao

L$^2$GC: Lorentzian Linear Graph Convolutional Networks For Node Classification

Mar 10, 2024
Qiuyu Liang, Weihua Wang, Feilong Bao, Guanglai Gao

Betray Oneself: A Novel Audio DeepFake Detection Model via Mono-to-Stereo Conversion

May 25, 2023
Rui Liu, Jinhua Zhang, Guanglai Gao, Haizhou Li

Explicit Intensity Control for Accented Text-to-speech

Oct 27, 2022
Rui Liu, Haolin Zuo, De Hu, Guanglai Gao, Haizhou Li

FCTalker: Fine and Coarse Grained Context Modeling for Expressive Conversational Speech Synthesis

Oct 27, 2022
Yifan Hu, Rui Liu, Guanglai Gao, Haizhou Li

Exploiting modality-invariant feature for robust multimodal emotion recognition with missing modalities

Oct 27, 2022
Haolin Zuo, Rui Liu, Jinming Zhao, Guanglai Gao, Haizhou Li

A Deep Investigation of RNN and Self-attention for the Cyrillic-Traditional Mongolian Bidirectional Conversion

Sep 24, 2022
Muhan Na, Rui Liu, Feilong, Guanglai Gao

MnTTS: An Open-Source Mongolian Text-to-Speech Synthesis Dataset and Accompanied Baseline

Sep 22, 2022
Yifan Hu, Pengkai Yin, Rui Liu, Feilong Bao, Guanglai Gao

Controllable Accented Text-to-Speech Synthesis

Sep 22, 2022
Rui Liu, Berrak Sisman, Guanglai Gao, Haizhou Li

Accurate Emotion Strength Assessment for Seen and Unseen Speech Based on Data-Driven Deep Learning

Jun 15, 2022
Rui Liu, Berrak Sisman, Björn Schuller, Guanglai Gao, Haizhou Li
