Hai Zhao

Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University

Language Model Pre-training on True Negatives

Dec 01, 2022
Zhuosheng Zhang, Hai Zhao, Masao Utiyama, Eiichiro Sumita


Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning

Oct 19, 2022
Hongqiu Wu, Ruixue Ding, Hai Zhao, Boli Chen, Pengjun Xie, Fei Huang, Min Zhang


Sentence Representation Learning with Generative Objective rather than Contrastive Objective

Oct 16, 2022
Bohong Wu, Hai Zhao


Towards End-to-End Open Conversational Machine Reading

Oct 13, 2022
Sizhe Zhou, Siru Ouyang, Zhuosheng Zhang, Hai Zhao


Task Compass: Scaling Multi-task Pre-training with Task Prefix

Oct 12, 2022
Zhuosheng Zhang, Shuohang Wang, Yichong Xu, Yuwei Fang, Wenhao Yu, Yang Liu, Hai Zhao, Chenguang Zhu, Michael Zeng


Instance Regularization for Discriminative Language Model Pre-training

Oct 11, 2022
Zhuosheng Zhang, Hai Zhao, Ming Zhou


Semantic-Preserving Adversarial Code Comprehension

Sep 12, 2022
Yiyang Li, Hongqiu Wu, Hai Zhao


Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning

Aug 23, 2022
Letian Peng, Zuchao Li, Hai Zhao


Learning Better Masking for Better Language Model Pre-training

Aug 23, 2022
Dongjie Yang, Zhuosheng Zhang, Hai Zhao


Adversarial Self-Attention for Language Understanding

Jun 25, 2022
Hongqiu Wu, Hai Zhao
