Shuo Ren

TESSP: Text-Enhanced Self-Supervised Speech Pre-training

Nov 24, 2022
Zhuoyuan Yao, Shuo Ren, Sanyuan Chen, Ziyang Ma, Pengcheng Guo, Lei Xie

RAPO: An Adaptive Ranking Paradigm for Bilingual Lexicon Induction

Oct 18, 2022
Zhoujin Tian, Chaozhuo Li, Shuo Ren, Zhiqiang Zuo, Zengxuan Wen, Xinyue Hu, Xiao Han, Haizhen Huang, Denvy Deng, Qi Zhang, Xing Xie

SpeechLM: Enhanced Speech Pre-Training with Unpaired Textual Data

Sep 30, 2022
Ziqiang Zhang, Sanyuan Chen, Long Zhou, Yu Wu, Shuo Ren, Shujie Liu, Zhuoyuan Yao, Xun Gong, Lirong Dai, Jinyu Li, Furu Wei

Speech Pre-training with Acoustic Piece

Apr 07, 2022
Shuo Ren, Shujie Liu, Yu Wu, Long Zhou, Furu Wei

KESA: A Knowledge Enhanced Approach For Sentiment Analysis

Feb 24, 2022
Qinghua Zhao, Shuai Ma, Shuo Ren

WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing

Oct 29, 2021
Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei

Optimizing Alignment of Speech and Language Latent Spaces for End-to-End Speech Recognition and Understanding

Oct 23, 2021
Wei Wang, Shuo Ren, Yao Qian, Shujie Liu, Yu Shi, Yanmin Qian, Michael Zeng

SpeechT5: Unified-Modal Encoder-Decoder Pre-training for Spoken Language Processing

Oct 14, 2021
Junyi Ao, Rui Wang, Long Zhou, Shujie Liu, Shuo Ren, Yu Wu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei

CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation

Feb 09, 2021
Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, Shujie Liu

GraphCodeBERT: Pre-training Code Representations with Data Flow

Sep 29, 2020
Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, Ming Zhou
