Qingyu Tan

SeaLLMs -- Large Language Models for Southeast Asia

Dec 01, 2023

Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning

Nov 16, 2023

Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models

Jun 27, 2023

Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data

Jun 16, 2023

Unlocking Temporal Question Answering for Large Language Models Using Code Execution

May 24, 2023

Revisiting DocRED -- Addressing the Overlooked False Negative Problem in Relation Extraction

May 25, 2022

Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation

Mar 21, 2022

On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation

Jun 06, 2021

Feature Adaptation of Pre-Trained Language Models across Languages and Domains with Robust Self-Training

Oct 06, 2020