Context-aware Fine-tuning of Self-supervised Speech Models


Dec 16, 2022
Suwon Shon, Felix Wu, Kwangyoun Kim, Prashant Sridhar, Karen Livescu, Shinji Watanabe


E-Branchformer: Branchformer with Enhanced merging for speech recognition


Sep 30, 2022
Kwangyoun Kim, Felix Wu, Yifan Peng, Jing Pan, Prashant Sridhar, Kyu J. Han, Shinji Watanabe

* Accepted to SLT 2022 


Wav2Seq: Pre-training Speech-to-Text Encoder-Decoder Models Using Pseudo Languages


May 02, 2022
Felix Wu, Kwangyoun Kim, Shinji Watanabe, Kyu Han, Ryan McDonald, Kilian Q. Weinberger, Yoav Artzi

* Code available at https://github.com/asappresearch/wav2seq 


SRU++: Pioneering Fast Recurrence with Attention for Speech Recognition


Oct 11, 2021
Jing Pan, Tao Lei, Kwangyoun Kim, Kyu Han, Shinji Watanabe


Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition


Sep 14, 2021
Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi

* Code available at https://github.com/asappresearch/sew 


Multi-mode Transformer Transducer with Stochastic Future Context


Jun 17, 2021
Kwangyoun Kim, Felix Wu, Prashant Sridhar, Kyu J. Han, Shinji Watanabe

* Accepted to Interspeech 2021 


Sequential Routing Framework: Fully Capsule Network-based Speech Recognition


Jul 23, 2020
Kyungmin Lee, Hyunwhan Joe, Hyeontaek Lim, Kwangyoun Kim, Sungsoo Kim, Chang Woo Han, Hong-Gee Kim

* 40 pages, 7 figures (10 figures in total); submitted to Computer Speech and Language (only line numbers removed from the submitted version) 


Small energy masking for improved neural network training for end-to-end speech recognition


Feb 15, 2020
Chanwoo Kim, Kwangyoun Kim, Sathish Reddy Indurthi

* Accepted at ICASSP 2020 


Attention based on-device streaming speech recognition with large speech corpus


Jan 02, 2020
Kwangyoun Kim, Kyungmin Lee, Dhananjaya Gowda, Junmo Park, Sungsoo Kim, Sichen Jin, Young-Yoon Lee, Jinsu Yeo, Daehyun Kim, Seokyeong Jung, Jungin Lee, Myoungji Han, Chanwoo Kim

* Accepted and presented at the ASRU 2019 conference 


Improved Multi-Stage Training of Online Attention-based Encoder-Decoder Models


Dec 28, 2019
Abhinav Garg, Dhananjaya Gowda, Ankur Kumar, Kwangyoun Kim, Mehul Kumar, Chanwoo Kim

* Accepted and presented at the ASRU 2019 conference 
