Yanyan Zhao

Large Language Models Meet Text-Centric Multimodal Sentiment Analysis: A Survey

Jun 12, 2024

RKLD: Reverse KL-Divergence-based Knowledge Distillation for Unlearning Personal Information in Large Language Models

Jun 04, 2024

Towards Comprehensive and Efficient Post Safety Alignment of Large Language Models via Safety Patching

May 22, 2024

How does Architecture Influence the Base Capabilities of Pre-trained Language Models? A Case Study Based on FFN-Wider Transformer Models

Mar 04, 2024

Vanilla Transformers are Transfer Capability Teachers

Mar 04, 2024

Both Matter: Enhancing the Emotional Intelligence of Large Language Models without Compromising the General Intelligence

Feb 15, 2024

DAPT: A Dual Attention Framework for Parameter-Efficient Continual Learning of Large Language Models

Jan 16, 2024

An Early Evaluation of GPT-4V(ision)

Oct 25, 2023

UNIMO-3: Multi-granularity Interaction for Vision-Language Representation Learning

May 23, 2023

Improving Cross-Task Generalization with Step-by-Step Instructions

May 08, 2023