Hai Zhao

Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University

Toward Adversarial Training on Contextualized Language Representation

May 08, 2023

Multimodal Chain-of-Thought Reasoning in Language Models

Feb 17, 2023

Channel-aware Decoupling Network for Multi-turn Dialogue Comprehension

Jan 11, 2023

Universal Multimodal Representation for Language Understanding

Jan 09, 2023

Self-Prompting Large Language Models for Open-Domain QA

Dec 16, 2022

Language Model Pre-training on True Negatives

Dec 01, 2022

Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning

Oct 19, 2022

Sentence Representation Learning with Generative Objective rather than Contrastive Objective

Oct 16, 2022

Towards End-to-End Open Conversational Machine Reading

Oct 13, 2022

Task Compass: Scaling Multi-task Pre-training with Task Prefix

Oct 12, 2022