Yuhui Zhang

Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data

Jan 16, 2024
Yuhui Zhang, Elaine Sui, Serena Yeung-Levy

Describing Differences in Image Sets with Natural Language

Dec 05, 2023
Lisa Dunlap, Yuhui Zhang, Xiaohan Wang, Ruiqi Zhong, Trevor Darrell, Jacob Steinhardt, Joseph E. Gonzalez, Serena Yeung-Levy

Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation

Nov 27, 2023
Yuhui Zhang, Brandon McKinzie, Zhe Gan, Vaishaal Shankar, Alexander Toshev

MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks

Oct 31, 2023
Allen Nie, Yuhui Zhang, Atharva Amdekar, Chris Piech, Tatsunori Hashimoto, Tobias Gerstenberg

Can large language models provide useful feedback on research papers? A large-scale empirical analysis

Oct 03, 2023
Weixin Liang, Yuhui Zhang, Hancheng Cao, Binglu Wang, Daisy Ding, Xinyu Yang, Kailas Vodrahalli, Siyu He, Daniel Smith, Yian Yin, Daniel McFarland, James Zou

Inverse Scaling: When Bigger Isn't Better

Jun 15, 2023
Ian R. McKenzie, Alexander Lyzhov, Michael Pieler, Alicia Parrish, Aaron Mueller, Ameya Prabhu, Euan McLean, Aaron Kirtland, Alexis Ross, Alisa Liu, Andrew Gritsevskiy, Daniel Wurgaft, Derik Kauffman, Gabriel Recchia, Jiacheng Liu, Joe Cavanagh, Max Weiss, Sicong Huang, The Floating Droid, Tom Tseng, Tomasz Korbak, Xudong Shen, Yuhui Zhang, Zhengping Zhou, Najoung Kim, Samuel R. Bowman, Ethan Perez

FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and LLMs

Jun 08, 2023
Shanshan Han, Baturalp Buyukates, Zijian Hu, Han Jin, Weizhao Jin, Lichao Sun, Xiaoyang Wang, Chulin Xie, Kai Zhang, Qifan Zhang, Yuhui Zhang, Chaoyang He, Salman Avestimehr

Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models

May 27, 2023
Yuhui Zhang, Michihiro Yasunaga, Zhengping Zhou, Jeff Z. HaoChen, James Zou, Percy Liang, Serena Yeung

Denoising Cosine Similarity: A Theory-Driven Approach for Efficient Representation Learning

Apr 19, 2023
Takumi Nakagawa, Yutaro Sanada, Hiroki Waida, Yuhui Zhang, Yuichiro Wada, Kōsaku Takanashi, Tomonori Yamada, Takafumi Kanamori

Towards Understanding the Mechanism of Contrastive Learning via Similarity Structure: A Theoretical Analysis

Apr 01, 2023
Hiroki Waida, Yuichiro Wada, Léo Andéol, Takumi Nakagawa, Yuhui Zhang, Takafumi Kanamori
