Fangyu Lei

Competition-Level Problems are Effective LLM Evaluators

Dec 05, 2023
Yiming Huang, Zhenghao Lin, Xiao Liu, Yeyun Gong, Shuai Lu, Fangyu Lei, Yaobo Liang, Yelong Shen, Chen Lin, Nan Duan, Weizhu Chen

Assessing Knowledge Editing in Language Models via Relation Perspective

Nov 15, 2023
Yifan Wei, Xiaoyan Yu, Huanhuan Ma, Fangyu Lei, Yixuan Weng, Ran Song, Kang Liu

S3Eval: A Synthetic, Scalable, Systematic Evaluation Suite for Large Language Models

Oct 23, 2023
Fangyu Lei, Qian Liu, Yiming Huang, Shizhu He, Jun Zhao, Kang Liu

TableQAKit: A Comprehensive and Practical Toolkit for Table-based Question Answering

Oct 23, 2023
Fangyu Lei, Tongxu Luo, Pengqi Yang, Weihao Liu, Hanwen Liu, Jiahe Lei, Yiming Huang, Yifan Wei, Shizhu He, Jun Zhao, Kang Liu

MenatQA: A New Dataset for Testing the Temporal Comprehension and Reasoning Abilities of Large Language Models

Oct 08, 2023
Yifan Wei, Yisong Su, Huanhuan Ma, Xiaoyan Yu, Fangyu Lei, Yuanzhe Zhang, Jun Zhao, Kang Liu

HRoT: Hybrid prompt strategy and Retrieval of Thought for Table-Text Hybrid Question Answering

Sep 22, 2023
Tongxu Luo, Fangyu Lei, Jiahe Lei, Weihao Liu, Shizhu He, Jun Zhao, Kang Liu

MMHQA-ICL: Multimodal In-context Learning for Hybrid Question Answering over Text, Tables and Images

Sep 09, 2023
Weihao Liu, Fangyu Lei, Tongxu Luo, Jiahe Lei, Shizhu He, Jun Zhao, Kang Liu

S$^3$HQA: A Three-Stage Approach for Multi-hop Text-Table Hybrid Question Answering

May 19, 2023
Fangyu Lei, Xiang Li, Yifan Wei, Shizhu He, Yiming Huang, Jun Zhao, Kang Liu

Multi-View Graph Representation Learning for Answering Hybrid Numerical Reasoning Question

May 05, 2023
Yifan Wei, Fangyu Lei, Yuanzhe Zhang, Jun Zhao, Kang Liu

Answering Numerical Reasoning Questions in Table-Text Hybrid Contents with Graph-based Encoder and Tree-based Decoder

Sep 16, 2022
Fangyu Lei, Shizhu He, Xiang Li, Jun Zhao, Kang Liu
