Chi-Min Chan

RQ-RAG: Learning to Refine Queries for Retrieval Augmented Generation

Mar 31, 2024
Chi-Min Chan, Chunpu Xu, Ruibin Yuan, Hongyin Luo, Wei Xue, Yike Guo, Jie Fu

AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents

Aug 21, 2023
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chen Qian, Chi-Min Chan, Yujia Qin, Yaxi Lu, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate

Aug 14, 2023
Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu

Arbitrary Few Parameters are Good Enough for Adapting Large-scale Pre-trained Language Models

Jun 04, 2023
Yusheng Su, Chi-Min Chan, Jiali Cheng, Yujia Qin, Yankai Lin, Shengding Hu, Zonghan Yang, Ning Ding, Zhiyuan Liu, Maosong Sun

Plug-and-Play Document Modules for Pre-trained Models

May 28, 2023
Chaojun Xiao, Zhengyan Zhang, Xu Han, Chi-Min Chan, Yankai Lin, Zhiyuan Liu, Xiangyang Li, Zhonghua Li, Zhao Cao, Maosong Sun

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models

Mar 15, 2022
Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, Jing Yi, Weilin Zhao, Xiaozhi Wang, Zhiyuan Liu, Hai-Tao Zheng, Jianfei Chen, Yang Liu, Jie Tang, Juanzi Li, Maosong Sun

On Transferability of Prompt Tuning for Natural Language Understanding

Nov 12, 2021
Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Zhiyuan Liu, Peng Li, Juanzi Li, Lei Hou, Maosong Sun, Jie Zhou
