Llion Jones

Helpful Neighbors: Leveraging Neighbors in Geographic Feature Pronunciation

Oct 18, 2022
Llion Jones, Richard Sproat, Haruko Ishikawa, Alexander Gutkin

DF-Conformer: Integrated architecture of Conv-TasNet and Conformer using linear complexity self-attention for speech enhancement

Jun 30, 2021
Yuma Koizumi, Shigeki Karita, Scott Wisdom, Hakan Erdogan, John R. Hershey, Llion Jones, Michiel Bacchiani

A Comparative Study on Neural Architectures and Training Methods for Japanese Speech Recognition

Jun 09, 2021
Shigeki Karita, Yotaro Kubo, Michiel Adriaan Unico Bacchiani, Llion Jones

CodeTrans: Towards Cracking the Language of Silicone's Code Through Self-Supervised Deep Learning and High Performance Computing

Apr 06, 2021
Ahmed Elnaggar, Wei Ding, Llion Jones, Tom Gibbs, Tamas Feher, Christoph Angerer, Silvia Severini, Florian Matthes, Burkhard Rost

ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing

Jul 20, 2020
Ahmed Elnaggar, Michael Heinzinger, Christian Dallago, Ghalia Rihawi, Yu Wang, Llion Jones, Tom Gibbs, Tamas Feher, Christoph Angerer, Martin Steinegger, Debsindhu Bhowmik, Burkhard Rost

Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling

Feb 21, 2019
Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, Yanzhang He, Jan Chorowski, Smit Hinsu, Stella Laurenzo, James Qin, Orhan Firat, Wolfgang Macherey, Suyog Gupta, Ankur Bapna, Shuyuan Zhang, Ruoming Pang, Ron J. Weiss, Rohit Prabhavalkar, Qiao Liang, Benoit Jacob, Bowen Liang, HyoukJoong Lee, Ciprian Chelba, Sébastien Jean, Bo Li, Melvin Johnson, Rohan Anil, Rajat Tibrewal, Xiaobing Liu, Akiko Eriguchi, Navdeep Jaitly, Naveen Ari, Colin Cherry, Parisa Haghani, Otavio Good, Youlong Cheng, Raziel Alvarez, Isaac Caswell, Wei-Ning Hsu, Zongheng Yang, Kuan-Chieh Wang, Ekaterina Gonina, Katrin Tomanek, Ben Vanik, Zelin Wu, Llion Jones, Mike Schuster, Yanping Huang, Dehao Chen, Kazuki Irie, George Foster, John Richardson, Klaus Macherey, Antoine Bruguier, Heiga Zen, Colin Raffel, Shankar Kumar, Kanishka Rao, David Rybach, Matthew Murray, Vijayaditya Peddinti, Maxim Krikun, Michiel A. U. Bacchiani, Thomas B. Jablin, Rob Suderman, Ian Williams, Benjamin Lee, Deepti Bhatia, Justin Carlson, Semih Yavuz, Yu Zhang, Ian McGraw, Max Galkin, Qi Ge, Golan Pundak, Chad Whipkey, Todd Wang, Uri Alon, Dmitry Lepikhin, Ye Tian, Sara Sabour, William Chan, Shubham Toshniwal, Baohua Liao, Michael Nirschl, Pat Rondon

Character-Level Language Modeling with Deeper Self-Attention

Aug 09, 2018
Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, Llion Jones

The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation

Apr 27, 2018
Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Niki Parmar, Mike Schuster, Zhifeng Chen, Yonghui Wu, Macduff Hughes

Tensor2Tensor for Neural Machine Translation

Mar 16, 2018
Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, Jakob Uszkoreit
