Mengnan Du

Mitigating Shortcuts in Language Models with Soft Label Encoding

Sep 17, 2023
Zirui He, Huiqi Deng, Haiyan Zhao, Ninghao Liu, Mengnan Du

Explainability for Large Language Models: A Survey

Sep 17, 2023
Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Mengnan Du

Adaptive Priority Reweighing for Generalizing Fairness Improvement

Sep 15, 2023
Zhihao Hu, Yiran Xu, Mengnan Du, Jindong Gu, Xinmei Tian, Fengxiang He

A Survey on Fairness in Large Language Models

Aug 20, 2023
Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang

XGBD: Explanation-Guided Graph Backdoor Detection

Aug 08, 2023
Zihan Guan, Mengnan Du, Ninghao Liu

DISPEL: Domain Generalization via Domain-Specific Liberating

Aug 01, 2023
Chia-Yuan Chang, Yu-Neng Chuang, Guanchu Wang, Mengnan Du, Na Zou

Prompt Tuning Pushes Farther, Contrastive Learning Pulls Closer: A Two-Stage Approach to Mitigate Social Biases

Jul 04, 2023
Yingji Li, Mengnan Du, Xin Wang, Ying Wang

FAIRER: Fairness as Decision Rationale Alignment

Jun 27, 2023
Tianlin Li, Qing Guo, Aishan Liu, Mengnan Du, Zhiming Li, Yang Liu
