Nan Du

Chunk, Align, Select: A Simple Long-sequence Processing Method for Transformers

Aug 25, 2023
Jiawen Xie, Pengyu Cheng, Xiao Liang, Yong Dai, Nan Du

Brainformers: Trading Simplicity for Efficiency

May 29, 2023
Yanqi Zhou, Nan Du, Yanping Huang, Daiyi Peng, Chang Lan, Da Huang, Siamak Shakeri, David So, Andrew Dai, Yifeng Lu, Zhifeng Chen, Quoc Le, Claire Cui, James Laudon, Jeff Dean

DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining

May 24, 2023
Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu

Flan-MoE: Scaling Instruction-Finetuned Language Models with Sparse Mixture of Experts

May 24, 2023
Sheng Shen, Le Hou, Yanqi Zhou, Nan Du, Shayne Longpre, Jason Wei, Hyung Won Chung, Barret Zoph, William Fedus, Xinyun Chen, Tu Vu, Yuexin Wu, Wuyang Chen, Albert Webson, Yunxuan Li, Vincent Zhao, Hongkun Yu, Kurt Keutzer, Trevor Darrell, Denny Zhou

Lifelong Language Pretraining with Distribution-Specialized Experts

May 20, 2023
Wuyang Chen, Yanqi Zhou, Nan Du, Yanping Huang, James Laudon, Zhifeng Chen, Claire Cui

PaLM 2 Technical Report

May 17, 2023
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, Yonghui Wu

Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference

Apr 11, 2023
Tao Lei, Junwen Bai, Siddhartha Brahma, Joshua Ainslie, Kenton Lee, Yanqi Zhou, Nan Du, Vincent Y. Zhao, Yuexin Wu, Bo Li, Yu Zhang, Ming-Wei Chang

Massively Multilingual Shallow Fusion with Large Language Models

Feb 17, 2023
Ke Hu, Tara N. Sainath, Bo Li, Nan Du, Yanping Huang, Andrew M. Dai, Yu Zhang, Rodrigo Cabrera, Zhifeng Chen, Trevor Strohman

ReAct: Synergizing Reasoning and Acting in Language Models

Oct 06, 2022
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao

PaLM: Scaling Language Modeling with Pathways

Apr 19, 2022
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, Noah Fiedel
