Qika Lin

Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models

Nov 15, 2023
Fangzhi Xu, Zhiyong Wu, Qiushi Sun, Siyu Ren, Fei Yuan, Shuai Yuan, Qika Lin, Yu Qiao, Jun Liu

A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics

Oct 09, 2023
Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria

Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation From Deductive, Inductive and Abductive Views

Jun 16, 2023
Fangzhi Xu, Qika Lin, Jiawei Han, Tianzhe Zhao, Jun Liu, Erik Cambria

Mind Reasoning Manners: Enhancing Type Perception for Generalized Zero-shot Logical Reasoning over Text

Jan 08, 2023
Fangzhi Xu, Jun Liu, Qika Lin, Tianzhe Zhao, Jian Zhang, Lingling Zhang

Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning

May 02, 2022
Fangzhi Xu, Qika Lin, Jun Liu, Yudai Pan, Lingling Zhang

MoCA: Incorporating Multi-stage Domain Pretraining and Cross-guided Multimodal Attention for Textbook Question Answering

Dec 06, 2021
Fangzhi Xu, Qika Lin, Jun Liu, Lingling Zhang, Tianzhe Zhao, Qi Chai, Yudai Pan

Learning First-Order Rules with Relational Path Contrast for Inductive Relation Reasoning

Oct 17, 2021
Yudai Pan, Jun Liu, Lingling Zhang, Xin Hu, Tianzhe Zhao, Qika Lin
