Hongzhan Chen

RoleInteract: Evaluating the Social Interaction of Role-Playing Agents

Mar 22, 2024
Hongzhan Chen, Hehong Chen, Ming Yan, Wenshen Xu, Xing Gao, Weizhou Shen, Xiaojun Quan, Chenliang Li, Ji Zhang, Fei Huang, Jingren Zhou

Small LLMs Are Weak Tool Learners: A Multi-LLM Agent

Feb 01, 2024
Weizhou Shen, Chenliang Li, Hongzhan Chen, Ming Yan, Xiaojun Quan, Hehong Chen, Ji Zhang, Fei Huang

Knowledge Distillation for Closed-Source Language Models

Jan 13, 2024
Hongzhan Chen, Xiaojun Quan, Hehong Chen, Ming Yan, Ji Zhang

MCC-KD: Multi-CoT Consistent Knowledge Distillation

Oct 24, 2023
Hongzhan Chen, Siyue Wu, Xiaojun Quan, Rui Wang, Ming Yan, Ji Zhang

AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression

May 17, 2023
Siyue Wu, Hongzhan Chen, Xiaojun Quan, Qifan Wang, Rui Wang
