Hanjie Chen

Benchmarking Large Language Models on Answering and Explaining Challenging Medical Questions

Mar 13, 2024
Hanjie Chen, Zhouxiang Fang, Yash Singla, Mark Dredze

RORA: Robust Free-Text Rationale Evaluation

Mar 01, 2024
Zhengping Jiang, Yining Lu, Hanjie Chen, Daniel Khashabi, Benjamin Van Durme, Anqi Liu

Explainability for Large Language Models: A Survey

Sep 17, 2023
Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Mengnan Du

Improving Interpretability via Explicit Word Interaction Graph Layer

Feb 03, 2023
Arshdeep Sekhon, Hanjie Chen, Aman Shrivastava, Zhe Wang, Yangfeng Ji, Yanjun Qi

KNIFE: Knowledge Distillation with Free-Text Rationales

Dec 19, 2022
Aaron Chan, Zhiyuan Zeng, Wyatt Lake, Brihi Joshi, Hanjie Chen, Xiang Ren

Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification

Dec 10, 2022
Ruixuan Tang, Hanjie Chen, Yangfeng Ji

REV: Information-Theoretic Evaluation of Free-Text Rationales

Oct 10, 2022
Hanjie Chen, Faeze Brahman, Xiang Ren, Yangfeng Ji, Yejin Choi, Swabha Swayamdipta
