Qingyu Tan

SeaLLMs -- Large Language Models for Southeast Asia
Dec 01, 2023

Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning
Nov 16, 2023

Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models
Jun 27, 2023

Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data
Jun 16, 2023

Unlocking Temporal Question Answering for Large Language Models Using Code Execution
May 24, 2023

Revisiting DocRED -- Addressing the Overlooked False Negative Problem in Relation Extraction
May 25, 2022

Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation
Mar 21, 2022

On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation
Jun 06, 2021

Feature Adaptation of Pre-Trained Language Models across Languages and Domains with Robust Self-Training
Oct 06, 2020