Tatsunori Hashimoto

Whose Opinions Do Language Models Reflect?

Mar 30, 2023
Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, Tatsunori Hashimoto

Foundation Models and Fair Use

Mar 28, 2023
Peter Henderson, Xuechen Li, Dan Jurafsky, Tatsunori Hashimoto, Mark A. Lemley, Percy Liang

Navigating the Grey Area: Expressions of Overconfidence and Uncertainty in Language Models

Feb 26, 2023
Kaitlyn Zhou, Dan Jurafsky, Tatsunori Hashimoto

Out-of-Domain Robustness via Targeted Augmentations

Feb 23, 2023
Irena Gao, Shiori Sagawa, Pang Wei Koh, Tatsunori Hashimoto, Percy Liang

Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks

Feb 11, 2023
Daniel Kang, Xuechen Li, Ion Stoica, Carlos Guestrin, Matei Zaharia, Tatsunori Hashimoto

Evaluating Self-Supervised Learning via Risk Decomposition

Feb 06, 2023
Yann Dubois, Tatsunori Hashimoto, Percy Liang

Tracing and Removing Data Errors in Natural Language Generation Datasets

Dec 21, 2022
Faisal Ladhak, Esin Durmus, Tatsunori Hashimoto

Privacy-Preserving Domain Adaptation of Semantic Parsers

Dec 20, 2022
Fatemehsadat Mireshghallah, Richard Shin, Yu Su, Tatsunori Hashimoto, Jason Eisner

Holistic Evaluation of Language Models

Nov 16, 2022
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, Yuta Koreeda

Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale

Nov 07, 2022
Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan
