
Shucong Zhang

Linear-Complexity Self-Supervised Learning for Speech Processing

Jul 18, 2024

Open-Source Conversational AI with SpeechBrain 1.0

Jul 02, 2024

CARE: Large Precision Matrix Estimation for Compositional Data

Sep 13, 2023

LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech

Sep 11, 2023

Sumformer: A Linear-Complexity Alternative to Self-Attention for Speech Recognition

Add code
Jul 12, 2023
Figure 1 for Sumformer: A Linear-Complexity Alternative to Self-Attention for Speech Recognition
Figure 2 for Sumformer: A Linear-Complexity Alternative to Self-Attention for Speech Recognition
Figure 3 for Sumformer: A Linear-Complexity Alternative to Self-Attention for Speech Recognition
Figure 4 for Sumformer: A Linear-Complexity Alternative to Self-Attention for Speech Recognition
Viaarxiv icon

Cross-Attention is all you need: Real-Time Streaming Transformers for Personalised Speech Enhancement

Nov 08, 2022

Transformer-based Streaming ASR with Cumulative Attention

Mar 11, 2022

Train your classifier first: Cascade Neural Networks Training from upper layers to lower layers

Feb 09, 2021

On the Usefulness of Self-Attention for Automatic Speech Recognition with Transformers

Nov 08, 2020

Stochastic Attention Head Removal: A Simple and Effective Method for Improving Automatic Speech Recognition with Transformers

Nov 08, 2020