Ruidan He

IAM: A Comprehensive and Large-Scale Dataset for Integrated Argument Mining Tasks

Mar 24, 2022
Liying Cheng, Lidong Bing, Ruidan He, Qian Yu, Yan Zhang, Luo Si

Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation

Mar 21, 2022
Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng

Knowledge Based Multilingual Language Model

Nov 22, 2021
Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq Joty, Luo Si

MELM: Data Augmentation with Masked Entity Language Modeling for Cross-lingual NER

Aug 31, 2021
Ran Zhou, Ruidan He, Xin Li, Lidong Bing, Erik Cambria, Luo Si, Chunyan Miao

On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation

Jun 06, 2021
Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si

Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model

Nov 23, 2020
Juntao Li, Ruidan He, Hai Ye, Hwee Tou Ng, Lidong Bing, Rui Yan

Feature Adaptation of Pre-Trained Language Models across Languages and Domains with Robust Self-Training

Oct 06, 2020
Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing

An Unsupervised Sentence Embedding Method by Mutual Information Maximization

Sep 25, 2020
Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, Lidong Bing

Feature Adaptation of Pre-Trained Language Models across Languages and Domains for Text Classification

Sep 24, 2020
Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing

An Interactive Multi-Task Learning Network for End-to-End Aspect-Based Sentiment Analysis

Jun 17, 2019
Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier