Fangzhi Xu

Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models

Jun 17, 2024

PathReasoner: Modeling Reasoning Path with Equivalent Extension for Logical Question Answering

May 29, 2024

A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond

Mar 21, 2024

A Semantic Mention Graph Augmented Model for Document-Level Event Argument Extraction

Mar 12, 2024

SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents

Jan 17, 2024

Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models

Nov 15, 2023

Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation From Deductive, Inductive and Abductive Views

Jun 16, 2023

Mind Reasoning Manners: Enhancing Type Perception for Generalized Zero-shot Logical Reasoning over Text

Jan 08, 2023

Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning

May 02, 2022

MoCA: Incorporating Multi-stage Domain Pretraining and Cross-guided Multimodal Attention for Textbook Question Answering

Dec 06, 2021