
Lei Li


Well-classified Examples are Underestimated in Classification with Deep Neural Networks

Oct 13, 2021
Guangxiang Zhao, Wenkai Yang, Xuancheng Ren, Lei Li, Xu Sun


The conventional wisdom behind learning deep classification models is to focus on badly classified examples and ignore well-classified examples that are far from the decision boundary. For instance, when training with cross-entropy loss, examples with higher likelihoods (i.e., well-classified examples) contribute smaller gradients in back-propagation. However, we theoretically show that this common practice hinders representation learning, energy optimization, and margin growth. To counteract this deficiency, we propose to reward well-classified examples with additive bonuses to revive their contribution to learning. This counterexample theoretically addresses these three issues. We empirically support the claim by directly verifying the theoretical results and by demonstrating significant performance improvements with our counterexample on diverse tasks, including image classification, graph classification, and machine translation. Furthermore, because our idea resolves these three issues, it also handles complex scenarios such as imbalanced classification, OOD detection, and applications under adversarial attacks.
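
For concreteness, here is a minimal PyTorch sketch of the general idea of an additive bonus: a reward term whose gradient grows as the predicted probability approaches 1, so well-classified examples keep contributing to learning. The specific bonus and the cap below are illustrative choices and are not claimed to match the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def encouraging_style_loss(logits, targets, p_cap=0.99):
        # standard cross-entropy plus an additive bonus for the target class
        log_p = F.log_softmax(logits, dim=-1)
        p = log_p.exp().gather(1, targets.unsqueeze(1)).squeeze(1)
        ce = F.nll_loss(log_p, targets)
        # log(1 - p) is a reward (negative term) whose gradient magnitude grows
        # as p -> 1, so well-classified examples keep receiving large gradients;
        # p is capped to keep the bonus finite (an illustrative simplification)
        bonus = torch.log1p(-p.clamp(max=p_cap))
        return ce + bonus.mean()

    # usage with dummy data
    logits = torch.randn(8, 10, requires_grad=True)
    targets = torch.randint(0, 10, (8,))
    loss = encouraging_style_loss(logits, targets)
    loss.backward()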


LightSeq: Accelerated Training for Transformer-based Models on GPUs

Oct 12, 2021
Xiaohui Wang, Ying Xiong, Xian Qian, Yang Wei, Lei Li, Mingxuan Wang


Transformer-based models have proven to be powerful in many natural language, computer vision, and speech recognition applications. Training these models is expensive due to variable input lengths, complex computation, and large numbers of parameters. Existing systems either focus only on efficient inference or optimize only BERT-like encoder models. In this paper, we present LightSeq, a system for efficient training of Transformer-based models on GPUs. We propose a series of GPU optimization techniques tailored to the computation flow and memory access patterns of the neural layers in Transformers. LightSeq supports a variety of network architectures, including BERT (encoder-only), GPT (decoder-only), and Transformer (encoder-decoder). Our experiments on GPUs with varying models and datasets show that LightSeq is 1.4-3.5x faster than previous systems. In particular, it achieves a 308% training speedup over existing systems on a large public machine translation benchmark (WMT14 English-German).
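
As context for the reported speedups, the sketch below (not LightSeq's API) simply times the forward and backward pass of a stock PyTorch Transformer encoder layer; this is the kind of per-layer training cost that kernel-level systems such as LightSeq aim to reduce. The layer size and batch shape are arbitrary.

    import time
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True).to(device)
    x = torch.randn(32, 128, 512, device=device)   # (batch, seq_len, d_model)

    def step():
        out = layer(x)
        out.sum().backward()
        layer.zero_grad(set_to_none=True)

    for _ in range(3):                             # warm-up iterations
        step()
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(20):
        step()
    if device == "cuda":
        torch.cuda.synchronize()
    print(f"avg step time: {(time.perf_counter() - start) / 20 * 1e3:.2f} ms")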

* 12 pages, 17 figures 

The Volctrans GLAT System: Non-autoregressive Translation Meets WMT21

Sep 24, 2021
Lihua Qian, Yi Zhou, Zaixiang Zheng, Yaoming Zhu, Zehui Lin, Jiangtao Feng, Shanbo Cheng, Lei Li, Mingxuan Wang, Hao Zhou


This paper describes Volctrans' submission to the WMT21 news translation shared task for German->English translation. We build a parallel (i.e., non-autoregressive) translation system using the Glancing Transformer, which enables fast and accurate parallel decoding in contrast to the currently prevailing autoregressive models. To the best of our knowledge, this is the first parallel translation system that can be scaled to a practical scenario like the WMT competition. More importantly, our parallel translation system achieves the best BLEU score (35.0) on the German->English translation task, outperforming all strong autoregressive counterparts.
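
The toy sketch below contrasts autoregressive decoding with one-pass parallel decoding, using a stand-in decoder that returns random logits; it is not the Glancing Transformer, only an illustration of why non-autoregressive decoding is fast: all target positions are predicted simultaneously.

    import torch

    vocab_size, tgt_len = 1000, 20

    def fake_decoder(prefix_or_positions):
        # placeholder for a real decoder returning one row of logits per input position
        return torch.randn(prefix_or_positions.shape[0], vocab_size)

    # autoregressive: tgt_len sequential decoder calls, one token at a time
    tokens = torch.zeros(1, dtype=torch.long)        # dummy BOS token
    for _ in range(tgt_len):
        logits = fake_decoder(tokens.unsqueeze(-1))
        tokens = torch.cat([tokens, logits[-1:].argmax(-1)])

    # non-autoregressive: a single decoder call over all positions at once
    positions = torch.arange(tgt_len)
    parallel_tokens = fake_decoder(positions.unsqueeze(-1)).argmax(-1)
    print(parallel_tokens.shape)                     # torch.Size([20])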

* 10 pages, 5 figures, WMT2021 

Dynamic Knowledge Distillation for Pre-trained Language Models

Sep 23, 2021
Lei Li, Yankai Lin, Shuhuai Ren, Peng Li, Jie Zhou, Xu Sun


Knowledge distillation (KD) has proven effective for compressing large-scale pre-trained language models. However, existing methods conduct KD statically, e.g., the student model aligns its output distribution to that of a selected teacher model on a pre-defined training dataset. In this paper, we explore dynamic knowledge distillation, which empowers the student to adjust the learning procedure according to its competency, with regard to both student performance and learning efficiency. We explore dynamic adjustment along three aspects: teacher model adoption, data selection, and KD objective adaptation. Experimental results show that (1) proper selection of the teacher model can boost the performance of the student model; (2) conducting KD with 10% of the most informative instances achieves comparable performance while greatly accelerating training; (3) student performance can be boosted by adjusting the supervision contributions of the different alignment objectives. We find dynamic knowledge distillation promising and provide discussions on potential future directions towards more efficient KD methods. Our code is available at https://github.com/lancopku/DynamicKD.
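
The sketch below illustrates two of the ingredients mentioned above under assumed shapes and hyper-parameters: selecting informative instances by prediction entropy, and the standard soft-label KD objective. It is an illustration, not the released DynamicKD code.

    import torch
    import torch.nn.functional as F

    def select_informative(student_logits, keep_ratio=0.1):
        # higher entropy = the student is less certain = more informative for KD
        probs = F.softmax(student_logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
        k = max(1, int(keep_ratio * student_logits.size(0)))
        return entropy.topk(k).indices

    def kd_loss(student_logits, teacher_logits, temperature=2.0):
        # soft-label alignment between student and teacher distributions
        s = F.log_softmax(student_logits / temperature, dim=-1)
        t = F.softmax(teacher_logits / temperature, dim=-1)
        return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

    student_logits = torch.randn(64, 5, requires_grad=True)
    teacher_logits = torch.randn(64, 5)
    idx = select_informative(student_logits.detach())
    loss = kd_loss(student_logits[idx], teacher_logits[idx])
    loss.backward()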

* Main Conference EMNLP 2021, Camera Ready 

Learning Kernel-Smoothed Machine Translation with Retrieved Examples

Sep 21, 2021
Qingnan Jiang, Mingxuan Wang, Jun Cao, Shanbo Cheng, Shujian Huang, Lei Li


How can neural machine translation (NMT) models be effectively adapted to emerging cases without retraining? Despite the great success of NMT, updating deployed models online remains a challenge. Existing non-parametric approaches that retrieve similar examples from a database to guide the translation process are promising, but they are prone to overfitting the retrieved examples. In this work, we propose to learn Kernel-Smoothed Translation with Example Retrieval (KSTER), an effective approach to adapting neural machine translation models online. Experiments on domain adaptation and multi-domain machine translation datasets show that, even without expensive retraining, KSTER achieves improvements of 1.1 to 1.5 BLEU over the best existing online adaptation methods. The code and trained models are released at https://github.com/jiangqn/KSTER.
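
The following sketch shows the general retrieval-smoothing idea under assumed shapes: a distribution over target tokens is built from retrieved examples with a Gaussian kernel and interpolated with the NMT model's own distribution. KSTER itself learns the kernel bandwidth and the mixing weight; here both are fixed.

    import torch
    import torch.nn.functional as F

    vocab_size, hidden, k = 100, 16, 8

    query = torch.randn(hidden)                  # decoder hidden state at this step
    keys = torch.randn(k, hidden)                # hidden states of k retrieved examples
    values = torch.randint(0, vocab_size, (k,))  # target tokens of retrieved examples
    model_probs = F.softmax(torch.randn(vocab_size), dim=-1)

    # kernel weights from squared distances, scattered onto the vocabulary
    dist2 = ((keys - query) ** 2).sum(-1)
    kernel = F.softmax(-dist2 / 2.0, dim=-1)     # bandwidth fixed to 1 here
    retrieval_probs = torch.zeros(vocab_size).index_add_(0, values, kernel)

    lam = 0.5                                    # KSTER learns this weight per step
    probs = (1 - lam) * model_probs + lam * retrieval_probs
    print(probs.argmax().item())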

* EMNLP 2021 

UniST: Unified End-to-end Model for Streaming and Non-streaming Speech Translation

Sep 15, 2021
Qianqian Dong, Yaoming Zhu, Mingxuan Wang, Lei Li


This paper presents a unified end-to-end framework for both streaming and non-streaming speech translation. While the training recipes for non-streaming speech translation are mature, recipes for streaming speech translation are yet to be built. In this work, we focus on developing a unified model (UniST) that supports both streaming and non-streaming ST from the perspective of fundamental components, including the training objective, the attention mechanism, and the decoding policy. Experiments on the most popular speech-to-text translation benchmark dataset, MuST-C, show that UniST achieves significant improvements for non-streaming ST and a better-learned trade-off between BLEU score and latency metrics for streaming ST, compared with end-to-end baselines and cascaded models. We will make our code and evaluation tools publicly available.
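
One common decoding policy for streaming translation is wait-k, sketched below as a simple read/write schedule; the concrete policy learned by UniST may differ, so treat this only as an illustration of what a streaming decoding policy decides. "READ" consumes one more source segment, "WRITE" emits one target token.

    def wait_k_schedule(num_source_segments, num_target_tokens, k=3):
        actions = []
        read, written = 0, 0
        while written < num_target_tokens:
            if read < min(written + k, num_source_segments):
                actions.append("READ")
                read += 1
            else:
                actions.append("WRITE")
                written += 1
        return actions

    print(wait_k_schedule(num_source_segments=6, num_target_tokens=5, k=2))
    # ['READ', 'READ', 'WRITE', 'READ', 'WRITE', 'READ', 'WRITE', 'READ', 'WRITE', 'READ', 'WRITE']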


Multilingual Translation via Grafting Pre-trained Language Models

Sep 11, 2021
Zewei Sun, Mingxuan Wang, Lei Li


Can a pre-trained BERT for one language and a GPT for another be glued together to translate texts? Self-supervised training using only monolingual data has led to the success of pre-trained (masked) language models in many NLP tasks. However, directly connecting BERT as an encoder and GPT as a decoder is challenging for machine translation, because GPT-like models lack the cross-attention component needed in seq2seq decoders. In this paper, we propose Graformer, which grafts separately pre-trained (masked) language models for machine translation. With monolingual data for pre-training and parallel data for grafting training, we take maximal advantage of both types of data. Experiments on 60 directions show that our method achieves average improvements of 5.8 BLEU in x2en directions and 2.9 BLEU in en2x directions compared with a multilingual Transformer of the same size.
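
As an illustration of the general "gluing" idea (not Graformer itself), the sketch below uses the Hugging Face EncoderDecoderModel to pair a pre-trained BERT encoder with a GPT-2 decoder; the library inserts randomly initialized cross-attention layers into the decoder, which is exactly the component the abstract notes GPT-like models lack. Model names and the example sentence pair are illustrative.

    import torch
    from transformers import AutoTokenizer, EncoderDecoderModel

    src_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    tgt_tok = AutoTokenizer.from_pretrained("gpt2")
    tgt_tok.pad_token = tgt_tok.eos_token

    # GPT-2 gains freshly initialized cross-attention layers here
    model = EncoderDecoderModel.from_encoder_decoder_pretrained(
        "bert-base-multilingual-cased", "gpt2"
    )
    model.config.decoder_start_token_id = tgt_tok.bos_token_id
    model.config.pad_token_id = tgt_tok.pad_token_id

    src = src_tok("Guten Morgen!", return_tensors="pt")
    tgt = tgt_tok("Good morning!", return_tensors="pt")
    # recent transformers versions build decoder_input_ids by shifting the labels;
    # older versions may require passing decoder_input_ids explicitly
    out = model(input_ids=src.input_ids, attention_mask=src.attention_mask,
                labels=tgt.input_ids)
    print(out.loss)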

* Accepted in EMNLP 2021 (Findings) 

LightNER: A Lightweight Generative Framework with Prompt-guided Attention for Low-resource NER

Sep 09, 2021
Xiang Chen, Ningyu Zhang, Lei Li, Xin Xie, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen


Most existing NER methods rely on extensive labeled data for model training and struggle in low-resource scenarios with limited training data. Recently, prompt-tuning methods for pre-trained language models have achieved remarkable performance in few-shot learning by exploiting prompts as task guidance to reduce the gap between the pre-training process and downstream tuning. Inspired by prompt learning, we propose LightNER, a lightweight generative framework with prompt-guided attention for low-resource NER. Specifically, we construct a semantic-aware answer space of entity categories for prompt learning to generate the entity span sequence and entity categories without any label-specific classifiers. We further propose prompt-guided attention, which incorporates continuous prompts into the self-attention layer to re-modulate the attention and adapt the pre-trained weights. Note that we tune only these continuous prompts while keeping all parameters of the pre-trained language model fixed, which makes our approach lightweight and flexible for low-resource scenarios and better able to transfer knowledge across domains. Experimental results show that LightNER obtains comparable performance in the standard supervised setting and outperforms strong baselines in low-resource settings by tuning only a small part of the parameters.
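
The sketch below shows one prefix-style form of prompt-guided attention under illustrative sizes: learnable continuous prompts are prepended to the keys and values of a frozen self-attention layer, so only the prompts are tuned. LightNER's exact re-modulation may differ.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PromptSelfAttention(nn.Module):
        def __init__(self, dim=64, prompt_len=8):
            super().__init__()
            self.q = nn.Linear(dim, dim)
            self.k = nn.Linear(dim, dim)
            self.v = nn.Linear(dim, dim)
            for proj in (self.q, self.k, self.v):
                proj.weight.requires_grad_(False)   # frozen pre-trained projections
                proj.bias.requires_grad_(False)
            # the only trainable parameters: continuous prompt keys and values
            self.prompt_k = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
            self.prompt_v = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)

        def forward(self, x):                       # x: (batch, seq, dim)
            b = x.size(0)
            q, k, v = self.q(x), self.k(x), self.v(x)
            k = torch.cat([self.prompt_k.expand(b, -1, -1), k], dim=1)
            v = torch.cat([self.prompt_v.expand(b, -1, -1), v], dim=1)
            attn = F.softmax(q @ k.transpose(1, 2) / x.size(-1) ** 0.5, dim=-1)
            return attn @ v

    layer = PromptSelfAttention()
    out = layer(torch.randn(2, 10, 64))
    print(out.shape)                                # torch.Size([2, 10, 64])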

* Work in progress 

Right Ventricular Segmentation from Short- and Long-Axis MRIs via Information Transition

Sep 05, 2021
Lei Li, Wangbin Ding, Liqun Huang, Xiahai Zhuang


Right ventricular (RV) segmentation from magnetic resonance imaging (MRI) is a crucial step for cardiac morphology and function analysis. However, automatic RV segmentation from MRI is still challenging, mainly due to heterogeneous intensity, complex and variable shapes, and the unclear RV boundary. Moreover, current methods for RV segmentation tend to suffer from performance degradation on the basal and apical slices of MRI. In this work, we propose an automatic RV segmentation framework, where information from long-axis (LA) views is utilized to assist the segmentation of short-axis (SA) views via information transition. Specifically, we employ the transformed segmentation from LA views as prior information to extract the ROI from SA views for better segmentation. The information transition aims to remove the surrounding ambiguous regions in the SA views. We tested our model on a public dataset with 360 multi-center, multi-vendor and multi-disease subjects, each with both LA and SA MRIs. Our experimental results show that including LA views can effectively improve the accuracy of SA segmentation. Our model is publicly available at https://github.com/NanYoMy/MMs-2.
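
A small NumPy sketch of the ROI-extraction step described above, with synthetic data: a mask transformed from the LA view into SA space serves as a prior for cropping the SA slice before segmentation. The registration/transformation itself is assumed to have been done beforehand.

    import numpy as np

    def crop_with_prior(sa_slice, prior_mask, margin=10):
        ys, xs = np.nonzero(prior_mask)
        if ys.size == 0:                        # empty prior: fall back to the full slice
            return sa_slice
        y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, sa_slice.shape[0])
        x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, sa_slice.shape[1])
        return sa_slice[y0:y1, x0:x1]

    sa_slice = np.random.rand(256, 256)
    prior_mask = np.zeros((256, 256), dtype=bool)
    prior_mask[100:140, 90:150] = True          # RV region predicted from the LA view
    roi = crop_with_prior(sa_slice, prior_mask)
    print(roi.shape)                            # (60, 80)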


Text AutoAugment: Learning Compositional Augmentation Policy for Text Classification

Sep 01, 2021
Shuhuai Ren, Jinchao Zhang, Lei Li, Xu Sun, Jie Zhou


Data augmentation aims to enrich training samples to alleviate overfitting in low-resource or class-imbalanced situations. Traditional methods first devise task-specific operations such as synonym substitution and then manually preset the corresponding parameters, such as the substitution rate, which requires a lot of prior knowledge and is prone to sub-optimal choices. Besides, the number of editing operations is limited in previous methods, which decreases the diversity of the augmented data and thus restricts the performance gain. To overcome these limitations, we propose a framework named Text AutoAugment (TAA) to establish a compositional and learnable paradigm for data augmentation. We regard a combination of various operations as an augmentation policy and utilize an efficient Bayesian optimization algorithm to automatically search for the best policy, which substantially improves the generalization capability of models. Experiments on six benchmark datasets show that TAA boosts classification accuracy in low-resource and class-imbalanced regimes by an average of 8.8% and 9.7%, respectively, outperforming strong baselines.
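
The toy sketch below illustrates the compositional-policy idea: a policy is a list of (operation, probability, magnitude) triples scored by a validation objective. Random search stands in for the Bayesian optimization used in the paper, and evaluate_policy is a placeholder for training on the augmented data and returning dev accuracy.

    import random

    OPERATIONS = ["synonym_substitute", "random_swap", "random_delete", "back_translate"]

    def sample_policy(n_ops=2):
        # a policy is a short sequence of (operation, probability, magnitude) triples
        return [(random.choice(OPERATIONS), round(random.uniform(0.1, 0.9), 2),
                 round(random.uniform(0.1, 0.5), 2)) for _ in range(n_ops)]

    def evaluate_policy(policy):
        # placeholder objective; a real run would train a classifier on data
        # augmented with `policy` and return its validation accuracy
        return random.random()

    best_policy, best_score = None, float("-inf")
    for _ in range(20):
        policy = sample_policy()
        score = evaluate_policy(policy)
        if score > best_score:
            best_policy, best_score = policy, score
    print(best_policy, best_score)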

* Accepted by EMNLP 2021 main conference (Long Paper) 