Ruiqi Zhong

UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models

Jan 20, 2022
Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu

The Effect of Model Size on Worst-Group Generalization

Dec 08, 2021
Alan Pham, Eunice Chan, Vikranth Srivatsa, Dhruba Ghosh, Yaoqing Yang, Yaodong Yu, Ruiqi Zhong, Joseph E. Gonzalez, Jacob Steinhardt

Meta-learning via Language Model In-context Tuning

Oct 15, 2021
Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, He He

Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level

May 13, 2021
Ruiqi Zhong, Dhruba Ghosh, Dan Klein, Jacob Steinhardt

Meta-tuning Language Models to Answer Prompts Better

Apr 17, 2021
Ruiqi Zhong, Kristy Lee, Zheng Zhang, Dan Klein

Approximating How Single Head Attention Learns

Mar 13, 2021
Charlie Snell, Ruiqi Zhong, Dan Klein, Jacob Steinhardt

Semantic Evaluation for Text-to-SQL with Distilled Test Suites

Oct 06, 2020
Ruiqi Zhong, Tao Yu, Dan Klein

Semantic Scaffolds for Pseudocode-to-Code Generation

May 12, 2020
Ruiqi Zhong, Mitchell Stern, Dan Klein
