Akiko Eriguchi

Building Multilingual Machine Translation Systems That Serve Arbitrary X-Y Translations

Jun 30, 2022
Akiko Eriguchi, Shufang Xie, Tao Qin, Hany Hassan Awadalla

Improving Multilingual Translation by Representation and Gradient Regularization

Sep 10, 2021
Yilin Yang, Akiko Eriguchi, Alexandre Muzio, Prasad Tadepalli, Stefan Lee, Hany Hassan

XLM-T: Scaling up Multilingual Machine Translation with Pretrained Cross-lingual Transformer Encoders

Dec 31, 2020
Shuming Ma, Jian Yang, Haoyang Huang, Zewen Chi, Li Dong, Dongdong Zhang, Hany Hassan Awadalla, Alexandre Muzio, Akiko Eriguchi, Saksham Singhal, Xia Song, Arul Menezes, Furu Wei

Neural Text Generation with Artificial Negative Examples

Dec 28, 2020
Keisuke Shirai, Kazuma Hashimoto, Akiko Eriguchi, Takashi Ninomiya, Shinsuke Mori

Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling

Feb 21, 2019
Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, Yanzhang He, Jan Chorowski, Smit Hinsu, Stella Laurenzo, James Qin, Orhan Firat, Wolfgang Macherey, Suyog Gupta, Ankur Bapna, Shuyuan Zhang, Ruoming Pang, Ron J. Weiss, Rohit Prabhavalkar, Qiao Liang, Benoit Jacob, Bowen Liang, HyoukJoong Lee, Ciprian Chelba, Sébastien Jean, Bo Li, Melvin Johnson, Rohan Anil, Rajat Tibrewal, Xiaobing Liu, Akiko Eriguchi, Navdeep Jaitly, Naveen Ari, Colin Cherry, Parisa Haghani, Otavio Good, Youlong Cheng, Raziel Alvarez, Isaac Caswell, Wei-Ning Hsu, Zongheng Yang, Kuan-Chieh Wang, Ekaterina Gonina, Katrin Tomanek, Ben Vanik, Zelin Wu, Llion Jones, Mike Schuster, Yanping Huang, Dehao Chen, Kazuki Irie, George Foster, John Richardson, Klaus Macherey, Antoine Bruguier, Heiga Zen, Colin Raffel, Shankar Kumar, Kanishka Rao, David Rybach, Matthew Murray, Vijayaditya Peddinti, Maxim Krikun, Michiel A. U. Bacchiani, Thomas B. Jablin, Rob Suderman, Ian Williams, Benjamin Lee, Deepti Bhatia, Justin Carlson, Semih Yavuz, Yu Zhang, Ian McGraw, Max Galkin, Qi Ge, Golan Pundak, Chad Whipkey, Todd Wang, Uri Alon, Dmitry Lepikhin, Ye Tian, Sara Sabour, William Chan, Shubham Toshniwal, Baohua Liao, Michael Nirschl, Pat Rondon

Multilingual Extractive Reading Comprehension by Runtime Machine Translation

Nov 02, 2018
Akari Asai, Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka

Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation

Sep 12, 2018
Akiko Eriguchi, Melvin Johnson, Orhan Firat, Hideto Kazawa, Wolfgang Macherey

Learning to Parse and Translate Improves Neural Machine Translation

Apr 23, 2017
Akiko Eriguchi, Yoshimasa Tsuruoka, Kyunghyun Cho

Tree-to-Sequence Attentional Neural Machine Translation

Jun 08, 2016
Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka
