Shucong Zhang

CARE: Large Precision Matrix Estimation for Compositional Data

Sep 13, 2023
Shucong Zhang, Huiyuan Wang, Wei Lin

LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech

Sep 11, 2023
Titouan Parcollet, Ha Nguyen, Solene Evain, Marcely Zanon Boito, Adrien Pupier, Salima Mdhaffar, Hang Le, Sina Alisamir, Natalia Tomashenko, Marco Dinarelli, Shucong Zhang, Alexandre Allauzen, Maximin Coavoux, Yannick Esteve, Mickael Rouvier, Jerome Goulian, Benjamin Lecouteux, Francois Portet, Solange Rossato, Fabien Ringeval, Didier Schwab, Laurent Besacier

Sumformer: A Linear-Complexity Alternative to Self-Attention for Speech Recognition

Jul 12, 2023
Titouan Parcollet, Rogier van Dalen, Shucong Zhang, Sourav Bhattacharya

Cross-Attention is all you need: Real-Time Streaming Transformers for Personalised Speech Enhancement

Nov 08, 2022
Shucong Zhang, Malcolm Chadwick, Alberto Gil C. P. Ramos, Sourav Bhattacharya

Transformer-based Streaming ASR with Cumulative Attention

Mar 11, 2022
Mohan Li, Shucong Zhang, Catalin Zorila, Rama Doddipatla

Train your classifier first: Cascade Neural Networks Training from upper layers to lower layers

Feb 09, 2021
Shucong Zhang, Cong-Thanh Do, Rama Doddipatla, Erfan Loweimi, Peter Bell, Steve Renals

On the Usefulness of Self-Attention for Automatic Speech Recognition with Transformers

Nov 08, 2020
Shucong Zhang, Erfan Loweimi, Peter Bell, Steve Renals

Stochastic Attention Head Removal: A Simple and Effective Method for Improving Automatic Speech Recognition with Transformers

Nov 08, 2020
Shucong Zhang, Erfan Loweimi, Peter Bell, Steve Renals

When Can Self-Attention Be Replaced by Feed Forward Layers?

May 28, 2020
Shucong Zhang, Erfan Loweimi, Peter Bell, Steve Renals
