
Arthur Pimentel

An Efficient End-to-End Approach to Noise Invariant Speech Features via Multi-Task Learning

Mar 13, 2024

On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild"

Sep 25, 2023

VIC-KD: Variance-Invariance-Covariance Knowledge Distillation to Make Keyword Spotting More Robust Against Adversarial Attacks

Sep 22, 2023

On the Transferability of Whisper-based Representations for "In-the-Wild" Cross-Task Downstream Speech Applications

May 23, 2023

An Exploration into the Performance of Unsupervised Cross-Task Speech Representations for "In the Wild" Edge Applications

May 09, 2023

RobustDistiller: Compressing Universal Speech Representations for Enhanced Environment Robustness

Feb 23, 2023

Improving the Robustness of DistilHuBERT to Unseen Noisy Conditions via Data Augmentation, Curriculum Learning, and Multi-Task Enhancement

Nov 12, 2022