Haoyan Liu

TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities

Dec 13, 2022
Zhe Zhao, Yudong Li, Cheng Hou, Jing Zhao, Rong Tian, Weijie Liu, Yiren Chen, Ningyuan Sun, Haoyan Liu, Weiquan Mao, Han Guo, Weigang Guo, Taiqiang Wu, Tao Zhu, Wenhang Shi, Chen Chen, Shan Huang, Sihong Chen, Liqun Liu, Feifei Li, Xiaoshuai Chen, Xingwu Sun, Zhanhui Kang, Xiaoyong Du, Linlin Shen, Kimmo Yan

Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. Pre-training models proposed for different modalities show a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into five components: embedding, encoder, target embedding, decoder, and target. Since almost all common modules are provided for each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
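
The five-component decomposition can be pictured as a registry of interchangeable modules. The sketch below is only an illustration of that idea in PyTorch, with hypothetical registry and class names rather than the actual TencentPretrain API.

```python
# Illustrative sketch only: hypothetical module registries, not the real
# TencentPretrain interface. It mirrors the five-component decomposition
# described in the abstract (embedding, encoder, target embedding, decoder,
# target); decoder and target embedding are omitted here for brevity.
import torch.nn as nn

EMBEDDINGS = {"word": nn.Embedding}                 # token / patch / speech embeddings
ENCODERS   = {"transformer": nn.TransformerEncoderLayer}
TARGETS    = {"mlm": nn.Linear}                     # pre-training objective head

class PretrainModel(nn.Module):
    """Compose a pre-training model from interchangeable components."""
    def __init__(self, vocab_size=30000, hidden=768, heads=12):
        super().__init__()
        self.embedding = EMBEDDINGS["word"](vocab_size, hidden)
        self.encoder = ENCODERS["transformer"](d_model=hidden, nhead=heads,
                                                batch_first=True)  # single layer for brevity
        self.target = TARGETS["mlm"](hidden, vocab_size)

    def forward(self, token_ids):
        hidden = self.encoder(self.embedding(token_ids))
        return self.target(hidden)                  # logits for the pre-training objective
```

In this scheme, swapping the "word" embedding for a patch or speech embedding while keeping the encoder and target would yield a vision or audio pre-training model, which is the kind of reuse the modular design is meant to enable.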

A Simple and Effective Method to Improve Zero-Shot Cross-Lingual Transfer Learning

Oct 18, 2022
Kunbo Ding, Weijie Liu, Yuejian Fang, Weiquan Mao, Zhe Zhao, Tao Zhu, Haoyan Liu, Rong Tian, Yiren Chen

Existing zero-shot cross-lingual transfer methods rely on parallel corpora or bilingual dictionaries, which are expensive and impractical for low-resource languages. To remove these dependencies, researchers have explored training multilingual models on English-only resources and transferring them to low-resource languages. However, the effect of this approach is limited by the gap between the embedding clusters of different languages. To address this issue, we propose Embedding-Push, Attention-Pull, and Robust targets to transfer English embeddings to virtual multilingual embeddings without semantic loss, thereby improving cross-lingual transferability. Experimental results on mBERT and XLM-R demonstrate that our method significantly outperforms previous work on zero-shot cross-lingual text classification and obtains better multilingual alignment.
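
As a very loose illustration of the general idea only: pull an English sentence embedding toward a virtual multilingual anchor while keeping the task objective. The function name, the MSE alignment term, and the `alpha` weight below are hypothetical stand-ins, not the paper's Embedding-Push, Attention-Pull, or Robust targets.

```python
# Hedged sketch: combine the task loss with an alignment term that nudges
# English embeddings toward hypothetical "virtual multilingual" embeddings.
import torch
import torch.nn.functional as F

def transfer_loss(task_logits, labels, en_emb, virtual_emb, alpha=0.1):
    """Cross-entropy task loss plus an illustrative push/pull alignment term."""
    task = F.cross_entropy(task_logits, labels)
    align = F.mse_loss(en_emb, virtual_emb)   # placeholder for the paper's targets
    return task + alpha * align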

* Published at COLING 2022 

SAMP: A Toolkit for Model Inference with Self-Adaptive Mixed-Precision

Sep 19, 2022
Rong Tian, Zijing Zhao, Weijie Liu, Haoyan Liu, Weiquan Mao, Zhe Zhao, Kimmo Yan

The latest industrial inference engines, such as FasterTransformer and TurboTransformers, have verified that half-precision floating point (FP16) and 8-bit integer (INT8) quantization can greatly improve model inference speed. However, existing FP16 and INT8 quantization methods are complicated to use, and improper usage can severely degrade performance. In this paper, we develop a toolkit that lets users easily quantize their models for inference, in which a Self-Adaptive Mixed-Precision (SAMP) mechanism automatically controls the quantization ratio through a mixed-precision architecture to balance efficiency and performance. Experimental results show that our SAMP toolkit achieves higher speedup than PyTorch and FasterTransformer while meeting the required performance. In addition, SAMP is based on a modular design that decouples the tokenizer, embedding, encoder, and target layers, which allows users to handle various downstream tasks and can be seamlessly integrated into PyTorch.
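
A minimal sketch of the self-adaptive idea as described above, assuming a per-layer calibration pass that keeps quantization-sensitive layers in FP16 and moves the rest to INT8; the function, error measure, and threshold below are illustrative, not the SAMP toolkit's actual interface.

```python
# Hedged sketch: pick INT8 for layers whose simulated quantization error on a
# calibration batch stays below a tolerance, FP16 otherwise.
import torch

def choose_precision(layers, calib_input, tol=1e-2):
    """Return a {layer_name: 'int8' | 'fp16'} plan for a dict of nn.Linear layers."""
    plan = {}
    with torch.no_grad():
        for name, layer in layers.items():
            ref = layer(calib_input)                         # full-precision reference
            scale = layer.weight.abs().max() / 127.0         # symmetric INT8 scale
            q_w = (layer.weight / scale).round().clamp(-127, 127) * scale
            quant_out = torch.nn.functional.linear(calib_input, q_w, layer.bias)
            err = (ref - quant_out).abs().mean() / ref.abs().mean().clamp_min(1e-8)
            plan[name] = "int8" if err < tol else "fp16"     # keep sensitive layers in FP16
    return plan
```

Raising `tol` trades accuracy for a higher INT8 ratio, which is the efficiency/performance balance the abstract describes.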

* 6 pages 

On the Characterizations of OTFS Modulation over multipath Rapid Fading Channel

Mar 30, 2021
Haoyan Liu, Yanming Liu, Min Yang, Qiongjie Zhang

Orthogonal time frequency space (OTFS) modulation has been confirmed to provide significant performance advantages against Doppler in high-mobility scenarios. The core feature of OTFS is that the time-variant channel is converted into a non-fading 2D channel in the delay-Doppler (DD) domain, so that all symbols experience the same channel gain. In the available literature, the channel is assumed to be quasi-static over an OTFS frame. For more practical channels, however, the input-output relation becomes time-variant as the environment or medium changes. In this paper, we analyze the characterizations of OTFS modulation over a more general multipath channel in which the signal on each path experiences its own rapid fading. First, we derive the explicit input-output relationship of OTFS in the DD domain for the cases of ideal and rectangular pulses. It is shown that the rapid fading produces extra Doppler dispersion without affecting the delay domain. We then demonstrate that OTFS can be interpreted as an efficient time-diversity technique that combines space-time encoding with interleaving. Simulation results reveal that OTFS is insensitive to rapid fading and still outperforms orthogonal frequency-division multiplexing (OFDM) in these types of channels.
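
For reference, a commonly cited form of the quasi-static, ideal-pulse DD-domain input-output relation (the baseline the paper generalizes by adding per-path rapid fading, which spreads the response further along the Doppler axis) is:

```latex
% Quasi-static, ideal-pulse delay-Doppler relation from the standard OTFS
% literature; not the generalized relation derived in the paper.
\[
  y[k,l] \;\approx\; \sum_{i=1}^{P} h_i\, e^{-j 2\pi \nu_i \tau_i}\,
  x\!\big[(k - k_{\nu_i}) \bmod N,\ (l - l_{\tau_i}) \bmod M\big] \;+\; w[k,l]
\]
% Path i has complex gain h_i, delay tau_i (delay index l_{tau_i}), and
% Doppler shift nu_i (Doppler index k_{nu_i}) on an N x M delay-Doppler
% grid; w[k,l] is the noise sample.
```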

A Split-and-Recombine Approach for Follow-up Query Analysis

Sep 19, 2019
Qian Liu, Bei Chen, Haoyan Liu, Lei Fang, Jian-Guang Lou, Bin Zhou, Dongmei Zhang

Context-dependent semantic parsing has proven to be an important yet challenging task. To leverage the advances in context-independent semantic parsing, we propose to perform follow-up query analysis, aiming to restate context-dependent natural language queries with contextual information. To accomplish the task, we propose STAR, a novel approach with a well-designed two-phase process. It is parser-independent and able to handle multifarious follow-up scenarios in different domains. Experiments on the FollowUp dataset show that STAR outperforms the state-of-the-art baseline by a large margin of nearly 8%. The superior parsing results verify the feasibility of follow-up query analysis. We also explore the extensibility of STAR on the SQA dataset, with promising results.
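
A toy sketch of what a split-and-recombine style restatement looks like; the word-level split and the hand-supplied conflict pair below are simplified placeholders for the paper's learned two-phase process, shown only to make the title concrete.

```python
# Toy illustration of split-and-recombine follow-up restatement:
# phase 1 splits the precedent and follow-up queries into spans,
# phase 2 recombines spans so the follow-up becomes self-contained.
def split(query):
    """Phase 1 (placeholder): split a query into spans; here, single words."""
    return query.lower().split()

def recombine(precedent_spans, conflicts):
    """Phase 2 (placeholder): substitute conflicting spans to restate the query."""
    restated = list(precedent_spans)
    for old, new in conflicts:
        restated[restated.index(old)] = new
    return " ".join(restated)

precedent = "show the sales of laptops in 2018"
followup = "how about phones"
conflicts = [("laptops", "phones")]      # hypothetical output of the learned pairing step
print(recombine(split(precedent), conflicts))
# -> "show the sales of phones in 2018"
```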

* Accepted by EMNLP 2019 

A Novel Demodulation and Estimation Algorithm for Blackout Communication: Extract Principal Components with Deep Learning

May 30, 2019
Haoyan Liu, Yanming Liu, Ming Yang, Xiaoping Li

For reentry or near-space communication, owing to the influence of the time-varying plasma sheath channel environment, the received IQ baseband signals are severely rotated on the constellation. Research has shown that the electron density fluctuates at frequencies from 20 kHz to 100 kHz, which is on the same order as the symbol rate of most TT&C communication systems, so a large amount of bandwidth would be consumed to track the time-varying channel with traditional estimation methods. In this paper, motivated by principal curve analysis, we propose a deep learning (DL) algorithm called the symmetric manifold network (SMN) to extract the curves on the constellation and classify the signals based on these curves. The key advantage is that SMN achieves joint optimization of demodulation and channel estimation. Our simulation results show that the new algorithm significantly reduces the symbol error rate (SER) compared with existing algorithms and enables accurate estimation of fading with an extremely high bandwidth utilization rate.
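
Purely as an illustration of demodulation-by-classification on rotated IQ points (not the SMN architecture from the paper), a minimal PyTorch classifier over (I, Q) inputs might look like the sketch below.

```python
# Hedged sketch: a small network mapping rotated IQ constellation samples to
# symbol classes, i.e. demodulation cast as classification.
import torch.nn as nn

class IQClassifier(nn.Module):
    def __init__(self, num_symbols=4):             # e.g. QPSK
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(),            # input: one (I, Q) pair
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_symbols),             # logits over transmitted symbols
        )

    def forward(self, iq):                          # iq: (batch, 2)
        return self.net(iq)
```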

r-Instance Learning for Missing People Tweets Identification

Jun 05, 2018
Yang Yang, Haoyan Liu, Xia Hu, Jiawei Zhang, Xiaoming Zhang, Zhoujun Li, Philip S. Yu

The number of missing people (i.e., people who get lost) has greatly increased in recent years. It is a serious worldwide problem, and finding missing people consumes a large amount of social resources. In tracking and finding these missing people, timely data gathering and analysis play an important role. With the development of social media, information about missing people can propagate through the web very quickly, which provides a promising way to address the problem. The information in online social media is usually of heterogeneous categories, involving both complex social interactions and textual data of diverse structures. Effective fusion of these different types of information for addressing the missing people identification problem can be a great challenge. Motivated by the multi-instance learning problem and the existing social-science theory of "homophily", in this paper we propose a novel r-instance (RI) learning model.

* 10 pages, 6 figures. arXiv admin note: text overlap with arXiv:1805.10617 