
"speech": models, code, and papers

An Experimental Investigation of Part-Of-Speech Taggers for Vietnamese

Jun 14, 2022
Tuan-Phong Nguyen, Quoc-Tuan Truong, Xuan-Nam Nguyen, Anh-Cuong Le

Figures 1–4 for An Experimental Investigation of Part-Of-Speech Taggers for Vietnamese

InterMulti: Multi-view Multimodal Interactions with Text-dominated Hierarchical High-order Fusion for Emotion Analysis

Dec 20, 2022
Feng Qiu, Wanzeng Kong, Yu Ding

Figures 1–4 for InterMulti: Multi-view Multimodal Interactions with Text-dominated Hierarchical High-order Fusion for Emotion Analysis

Layer-wise Fast Adaptation for End-to-End Multi-Accent Speech Recognition

Apr 21, 2022
Xun Gong, Yizhou Lu, Zhikai Zhou, Yanmin Qian

Figures 1–4 for Layer-wise Fast Adaptation for End-to-End Multi-Accent Speech Recognition

A Speech Intelligibility Enhancement Model based on Canonical Correlation and Deep Learning for Hearing-Assistive Technologies

Feb 15, 2022
Tassadaq Hussain, Muhammad Diyan, Mandar Gogate, Kia Dashtipour, Ahsan Adeel, Yu Tsao, Amir Hussain

Figures 1–4 for A Speech Intelligibility Enhancement Model based on Canonical Correlation and Deep Learning for Hearing-Assistive Technologies

Automated Sex Classification of Children's Voices and Changes in Differentiating Factors with Age

Sep 27, 2022
Fuling Chen, Roberto Togneri, Murray Maybery, Diana Weiting Tan

Figures 1–4 for Automated Sex Classification of Children's Voices and Changes in Differentiating Factors with Age

Tree-constrained Pointer Generator with Graph Neural Network Encodings for Contextual Speech Recognition

Jul 02, 2022
Guangzhi Sun, Chao Zhang, Philip C. Woodland

Figures 1–4 for Tree-constrained Pointer Generator with Graph Neural Network Encodings for Contextual Speech Recognition

Selecting and combining complementary feature representations and classifiers for hate speech detection

Jan 18, 2022
Rafael M. O. Cruz, Woshington V. de Sousa, George D. C. Cavalcanti

Figures 1–4 for Selecting and combining complementary feature representations and classifiers for hate speech detection

Hate Speech Classifiers Learn Human-Like Social Stereotypes

Oct 28, 2021
Aida Mostafazadeh Davani, Mohammad Atari, Brendan Kennedy, Morteza Dehghani

Figures 1–4 for Hate Speech Classifiers Learn Human-Like Social Stereotypes

Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation

Apr 12, 2022
Wenjing Zhu, Xiang Li

Figures 1–4 for Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation

Audio-visual speech separation based on joint feature representation with cross-modal attention

Mar 05, 2022
Junwen Xiong, Peng Zhang, Lei Xie, Wei Huang, Yufei Zha, Yanning Zhang

Figures 1–4 for Audio-visual speech separation based on joint feature representation with cross-modal attention