
Qika Lin

PathReasoner: Modeling Reasoning Path with Equivalent Extension for Logical Question Answering

May 29, 2024

A Semantic Mention Graph Augmented Model for Document-Level Event Argument Extraction

Mar 12, 2024

Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models

Nov 15, 2023

A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics

Oct 09, 2023

Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation From Deductive, Inductive and Abductive Views

Jun 16, 2023

Mind Reasoning Manners: Enhancing Type Perception for Generalized Zero-shot Logical Reasoning over Text

Jan 08, 2023

Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning

May 02, 2022

MoCA: Incorporating Multi-stage Domain Pretraining and Cross-guided Multimodal Attention for Textbook Question Answering

Dec 06, 2021

Learning First-Order Rules with Relational Path Contrast for Inductive Relation Reasoning

Oct 17, 2021