Tianzhe Zhao

PathReasoner: Modeling Reasoning Path with Equivalent Extension for Logical Question Answering

May 29, 2024

Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation From Deductive, Inductive and Abductive Views

Jun 16, 2023

Mind Reasoning Manners: Enhancing Type Perception for Generalized Zero-shot Logical Reasoning over Text

Jan 08, 2023

MoCA: Incorporating Multi-stage Domain Pretraining and Cross-guided Multimodal Attention for Textbook Question Answering

Dec 06, 2021

Learning First-Order Rules with Relational Path Contrast for Inductive Relation Reasoning

Oct 17, 2021