Sangha Kim

Label-Free Multi-Domain Machine Translation with Stage-wise Training
May 06, 2023
Fan Zhang, Mei Tu, Sangha Kim, Song Liu, Jinyao Yan

Monotonic Simultaneous Translation with Chunk-wise Reordering and Refinement
Oct 18, 2021
HyoJung Han, Seokchan Ahn, Yoonjung Choi, Insoo Chung, Sangha Kim, Kyunghyun Cho

Decision Attentive Regularization to Improve Simultaneous Speech Translation Systems
Oct 13, 2021
Mohd Abbas Zaidi, Beomseok Lee, Nikhil Kumar Lakumarapu, Sangha Kim, Chanwoo Kim

Infusing Future Information into Monotonic Attention Through Language Models
Sep 07, 2021
Mohd Abbas Zaidi, Sathish Indurthi, Beomseok Lee, Nikhil Kumar Lakumarapu, Sangha Kim

Faster Re-translation Using Non-Autoregressive Model For Simultaneous Neural Machine Translation
Dec 29, 2020
Hyojung Han, Sathish Indurthi, Mohd Abbas Zaidi, Nikhil Kumar Lakumarapu, Beomseok Lee, Sangha Kim, Chanwoo Kim, Inchul Hwang

Extremely Low Bit Transformer Quantization for On-Device Neural Machine Translation
Oct 13, 2020
Insoo Chung, Byeongwook Kim, Yoonjung Choi, Se Jung Kwon, Yongkweon Jeon, Baeseong Park, Sangha Kim, Dongsoo Lee

Data Efficient Direct Speech-to-Text Translation with Modality Agnostic Meta-Learning
Nov 11, 2019
Sathish Indurthi, Houjeung Han, Nikhil Kumar Lakumarapu, Beomseok Lee, Insoo Chung, Sangha Kim, Chanwoo Kim