
Arthur Pimentel

An Efficient End-to-End Approach to Noise Invariant Speech Features via Multi-Task Learning

Mar 13, 2024
Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Boxing Chen, Tiago H. Falk

On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild"

Sep 25, 2023
Arthur Pimentel, Heitor Guimarães, Anderson R. Avila, Mehdi Rezagholizadeh, Tiago H. Falk

VIC-KD: Variance-Invariance-Covariance Knowledge Distillation to Make Keyword Spotting More Robust Against Adversarial Attacks

Sep 22, 2023
Heitor R. Guimarães, Arthur Pimentel, Anderson Avila, Tiago H. Falk

On the Transferability of Whisper-based Representations for "In-the-Wild" Cross-Task Downstream Speech Applications

May 23, 2023
Vamsikrishna Chemudupati, Marzieh Tahaei, Heitor Guimaraes, Arthur Pimentel, Anderson Avila, Mehdi Rezagholizadeh, Boxing Chen, Tiago Falk

An Exploration into the Performance of Unsupervised Cross-Task Speech Representations for "In the Wild" Edge Applications

May 09, 2023
Heitor Guimarães, Arthur Pimentel, Anderson Avila, Mehdi Rezagholizadeh, Tiago H. Falk

RobustDistiller: Compressing Universal Speech Representations for Enhanced Environment Robustness

Feb 23, 2023
Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Boxing Chen, Tiago H. Falk

Improving the Robustness of DistilHuBERT to Unseen Noisy Conditions via Data Augmentation, Curriculum Learning, and Multi-Task Enhancement

Nov 12, 2022
Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Tiago H. Falk
