Sharan Narang

PaLM: Scaling Language Modeling with Pathways

Apr 19, 2022
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, Noah Fiedel

Self-Consistency Improves Chain of Thought Reasoning in Language Models

Apr 06, 2022
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou

Scaling Up Models and Data with t5x and seqio

Mar 31, 2022
Adam Roberts, Hyung Won Chung, Anselm Levskaya, Gaurav Mishra, James Bradbury, Daniel Andor, Sharan Narang, Brian Lester, Colin Gaffney, Afroz Mohiuddin, Curtis Hawthorne, Aitor Lewkowycz, Alex Salcianu, Marc van Zee, Jacob Austin, Sebastian Goodman, Livio Baldini Soares, Haitang Hu, Sasha Tsvyashchenko, Aakanksha Chowdhery, Jasmijn Bastings, Jannis Bulian, Xavier Garcia, Jianmo Ni, Andrew Chen, Kathleen Kenealy, Jonathan H. Clark, Stephan Lee, Dan Garrette, James Lee-Thorp, Colin Raffel, Noam Shazeer, Marvin Ritter, Maarten Bosma, Alexandre Passos, Jeremy Maitin-Shepard, Noah Fiedel, Mark Omernick, Brennan Saeta, Ryan Sepassi, Alexander Spiridonov, Joshua Newlan, Andrea Gesmundo

Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers

Sep 22, 2021
Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler

ByT5: Towards a token-free future with pre-trained byte-to-byte models

May 28, 2021
Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel

Do Transformer Modifications Transfer Across Implementations and Applications?

Feb 23, 2021
Sharan Narang, Hyung Won Chung, Yi Tay, William Fedus, Thibault Fevry, Michael Matena, Karishma Malkan, Noah Fiedel, Noam Shazeer, Zhenzhong Lan, Yanqi Zhou, Wei Li, Nan Ding, Jake Marcus, Adam Roberts, Colin Raffel

On Task-Level Dialogue Composition of Generative Transformer Model

Oct 09, 2020
Prasanna Parthasarathi, Arvind Neelakantan, Sharan Narang

WT5?! Training Text-to-Text Models to Explain their Predictions

Apr 30, 2020
Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, Karishma Malkan

Neural Assistant: Joint Action Prediction, Response Generation, and Latent Knowledge Reasoning

Oct 31, 2019
Arvind Neelakantan, Semih Yavuz, Sharan Narang, Vishaal Prasad, Ben Goodrich, Daniel Duckworth, Chinnadhurai Sankar, Xifeng Yan
