Mehul Kumar
Semi-supervised transfer learning for language expansion of end-to-end speech recognition models to low-resource languages

Nov 19, 2021
Jiyeon Kim, Mehul Kumar, Dhananjaya Gowda, Abhinav Garg, Chanwoo Kim

A comparison of streaming models and data augmentation methods for robust speech recognition

Nov 19, 2021
Jiyeon Kim, Mehul Kumar, Dhananjaya Gowda, Abhinav Garg, Chanwoo Kim

Improved Multi-Stage Training of Online Attention-based Encoder-Decoder Models

Dec 28, 2019
Abhinav Garg, Dhananjaya Gowda, Ankur Kumar, Kwangyoun Kim, Mehul Kumar, Chanwoo Kim

Power-law Nonlinearity with Maximally Uniform Distribution Criterion for Improved Neural Network Training in Automatic Speech Recognition

Dec 22, 2019
Chanwoo Kim, Mehul Kumar, Kwangyoun Kim, Dhananjaya Gowda

End-to-End Training of a Large Vocabulary End-to-End Speech Recognition System

Dec 22, 2019
Chanwoo Kim, Sungsoo Kim, Kwangyoun Kim, Mehul Kumar, Jiyeon Kim, Kyungmin Lee, Changwoo Han, Abhinav Garg, Eunhyang Kim, Minkyoo Shin, Shatrughan Singh, Larry Heck, Dhananjaya Gowda
