David R. So

PaLM 2 Technical Report

May 17, 2023
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, Yonghui Wu

EvoPrompting: Language Models for Code-Level Neural Architecture Search

Feb 28, 2023
Angelica Chen, David M. Dohan, David R. So

Unified Functional Hashing in Automatic Machine Learning

Feb 10, 2023
Ryan Gillard, Stephen Jonany, Yingjie Miao, Michael Munn, Connal de Souza, Jonathan Dungay, Chen Liang, David R. So, Quoc V. Le, Esteban Real

Transcending Scaling Laws with 0.1% Extra Compute

Oct 20, 2022
Yi Tay, Jason Wei, Hyung Won Chung, Vinh Q. Tran, David R. So, Siamak Shakeri, Xavier Garcia, Huaixiu Steven Zheng, Jinfeng Rao, Aakanksha Chowdhery, Denny Zhou, Donald Metzler, Slav Petrov, Neil Houlsby, Quoc V. Le, Mostafa Dehghani

Primer: Searching for Efficient Transformers for Language Modeling

Sep 17, 2021
David R. So, Wojciech Mańke, Hanxiao Liu, Zihang Dai, Noam Shazeer, Quoc V. Le

Pay Attention to MLPs

Jun 01, 2021
Hanxiao Liu, Zihang Dai, David R. So, Quoc V. Le

MUFASA: Multimodal Fusion Architecture Search for Electronic Health Records

Feb 03, 2021
Zhen Xu, David R. So, Andrew M. Dai

AutoML-Zero: Evolving Machine Learning Algorithms From Scratch

Mar 06, 2020
Esteban Real, Chen Liang, David R. So, Quoc V. Le

Towards a Human-like Open-Domain Chatbot

Feb 27, 2020
Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, Quoc V. Le
