"speech": models, code, and papers

Learning Phone Recognition from Unpaired Audio and Phone Sequences Based on Generative Adversarial Network

Jul 29, 2022
Da-rong Liu, Po-chun Hsu, Yi-chen Chen, Sung-feng Huang, Shun-po Chuang, Da-yi Wu, Hung-yi Lee

Reducing Target Group Bias in Hate Speech Detectors

Dec 07, 2021
Darsh J Shah, Sinong Wang, Han Fang, Hao Ma, Luke Zettlemoyer

Unsupervised low-rank representations for speech emotion recognition

Apr 14, 2021
Georgios Paraskevopoulos, Efthymios Tzinis, Nikolaos Ellinas, Theodoros Giannakopoulos, Alexandros Potamianos

An Improved Model for Voicing Silent Speech

Jun 03, 2021
David Gaddy, Dan Klein

Codes, Patterns and Shapes of Contemporary Online Antisemitism and Conspiracy Narratives -- an Annotation Guide and Labeled German-Language Dataset in the Context of COVID-19

Oct 13, 2022
Elisabeth Steffen, Helena Mihaljević, Milena Pustet, Nyco Bischoff, María do Mar Castro Varela, Yener Bayramoğlu, Bahar Oghalai

Incorporating Multi-Target in Multi-Stage Speech Enhancement Model for Better Generalization

Jul 09, 2021
Lu Zhang, Mingjiang Wang, Andong Li, Zehua Zhang, Xuyi Zhuang

KeypartX: Graph-based Perception (Text) Representation

Sep 23, 2022
Peng Yang

Defense against Adversarial Attacks on Hybrid Speech Recognition using Joint Adversarial Fine-tuning with Denoiser

Apr 08, 2022
Sonal Joshi, Saurabh Kataria, Yiwen Shao, Piotr Zelasko, Jesus Villalba, Sanjeev Khudanpur, Najim Dehak

Speech Enhancement by Noise Self-Supervised Rank-Constrained Spatial Covariance Matrix Estimation via Independent Deeply Learned Matrix Analysis

Sep 10, 2021
Sota Misawa, Norihiro Takamune, Tomohiko Nakamura, Daichi Kitamura, Hiroshi Saruwatari, Masakazu Une, Shoji Makino

Does a PESQNet (Loss) Require a Clean Reference Input? The Original PESQ Does, But ACR Listening Tests Don't

May 13, 2022
Ziyi Xu, Maximilian Strake, Tim Fingscheidt
