Qingyu Tan

SeaLLMs -- Large Language Models for Southeast Asia

Dec 01, 2023
Xuan-Phi Nguyen, Wenxuan Zhang, Xin Li, Mahani Aljunied, Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang, Chaoqun Liu, Hang Zhang, Lidong Bing

Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning

Nov 16, 2023
Qingyu Tan, Hwee Tou Ng, Lidong Bing

Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models

Jun 27, 2023
Qingyu Tan, Hwee Tou Ng, Lidong Bing

Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data

Jun 16, 2023
Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng

Unlocking Temporal Question Answering for Large Language Models Using Code Execution

May 24, 2023
Xingxuan Li, Liying Cheng, Qingyu Tan, Hwee Tou Ng, Shafiq Joty, Lidong Bing

Revisiting DocRED -- Addressing the Overlooked False Negative Problem in Relation Extraction

May 25, 2022
Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng

Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation

Mar 21, 2022
Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng

On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation

Jun 06, 2021
Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si

Feature Adaptation of Pre-Trained Language Models across Languages and Domains with Robust Self-Training

Oct 06, 2020
Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing
