Hai Zhao

Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University

Self-Prompting Large Language Models for Open-Domain QA

Dec 16, 2022

Language Model Pre-training on True Negatives

Dec 01, 2022

Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning

Oct 19, 2022

Sentence Representation Learning with Generative Objective rather than Contrastive Objective

Oct 16, 2022

Towards End-to-End Open Conversational Machine Reading

Oct 13, 2022

Task Compass: Scaling Multi-task Pre-training with Task Prefix

Oct 12, 2022

Instance Regularization for Discriminative Language Model Pre-training

Oct 11, 2022

Semantic-Preserving Adversarial Code Comprehension

Sep 12, 2022

Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning

Aug 23, 2022

Learning Better Masking for Better Language Model Pre-training

Aug 23, 2022