Fangzhi Xu

A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond

Mar 21, 2024
Qiushi Sun, Zhirui Chen, Fangzhi Xu, Kanzhi Cheng, Chang Ma, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, Qipeng Guo, Xipeng Qiu, Pengcheng Yin, Xiaoli Li, Fei Yuan, Lingpeng Kong, Xiang Li, Zhiyong Wu

A Semantic Mention Graph Augmented Model for Document-Level Event Argument Extraction

Mar 12, 2024
Jian Zhang, Changlin Yang, Haiping Zhu, Qika Lin, Fangzhi Xu, Jun Liu

SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents

Jan 17, 2024
Kanzhi Cheng, Qiushi Sun, Yougang Chu, Fangzhi Xu, Yantao Li, Jianbing Zhang, Zhiyong Wu

Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models

Nov 15, 2023
Fangzhi Xu, Zhiyong Wu, Qiushi Sun, Siyu Ren, Fei Yuan, Shuai Yuan, Qika Lin, Yu Qiao, Jun Liu

Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation From Deductive, Inductive and Abductive Views

Jun 16, 2023
Fangzhi Xu, Qika Lin, Jiawei Han, Tianzhe Zhao, Jun Liu, Erik Cambria

Mind Reasoning Manners: Enhancing Type Perception for Generalized Zero-shot Logical Reasoning over Text

Jan 08, 2023
Fangzhi Xu, Jun Liu, Qika Lin, Tianzhe Zhao, Jian Zhang, Lingling Zhang

Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning

May 02, 2022
Fangzhi Xu, Qika Lin, Jun Liu, Yudai Pan, Lingling Zhang

MoCA: Incorporating Multi-stage Domain Pretraining and Cross-guided Multimodal Attention for Textbook Question Answering

Dec 06, 2021
Fangzhi Xu, Qika Lin, Jun Liu, Lingling Zhang, Tianzhe Zhao, Qi Chai, Yudai Pan
