Chong Deng

Loss Masking Is Not Needed in Decoder-only Transformer for Discrete-token Based ASR

Nov 08, 2023
Qian Chen, Wen Wang, Qinglin Zhang, Siqi Zheng, Shiliang Zhang, Chong Deng, Yukun Ma, Hai Yu, Jiaqing Liu, Chong Zhang

Recently, unified speech-text models, such as SpeechGPT, VioLA, and AudioPaLM, have achieved remarkable performance on speech tasks. These models convert continuous speech signals into discrete tokens (speech discretization) and merge text and speech tokens into a shared vocabulary. They then train a single decoder-only Transformer on a mixture of speech tasks. Notably, all of these models apply Loss Masking to the input speech tokens for the ASR task, which means they do not explicitly model the dependencies among speech tokens. In this paper, we attempt to model the sequence of speech tokens autoregressively, as is done for text. However, we find that applying the conventional cross-entropy loss to the input speech tokens does not consistently improve ASR performance over Loss Masking. We therefore propose a novel approach, Smoothed Label Distillation (SLD), which introduces a KL divergence loss with smoothed labels on the input speech tokens to model them effectively (a rough sketch follows below). Experiments demonstrate that SLD alleviates the limitations of the cross-entropy loss and consistently outperforms Loss Masking for decoder-only Transformer based ASR across different speech discretization methods.
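
A rough sketch of the core idea, not the paper's exact formulation: a KL divergence between the model's predictive distribution and a smoothed one-hot target, applied only at speech-token positions. The function name, tensor shapes, and smoothing value below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def smoothed_label_kl_loss(logits, targets, speech_mask, eps=0.1):
    """KL loss against a smoothed one-hot target at speech-token positions.

    logits:      (B, T, V) decoder outputs
    targets:     (B, T)    next-token ids
    speech_mask: (B, T)    1.0 where the target is a speech token, else 0.0
    eps:         smoothing mass spread over the rest of the vocabulary
    """
    vocab = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)

    # Smoothed one-hot target: 1 - eps on the true token, eps / (V - 1) elsewhere.
    smooth = torch.full_like(log_probs, eps / (vocab - 1))
    smooth.scatter_(-1, targets.unsqueeze(-1), 1.0 - eps)

    # KL(smoothed target || model) per position, summed over the vocabulary.
    kl = (smooth * (smooth.log() - log_probs)).sum(-1)

    # Average only over speech-token positions; text positions would keep
    # the usual cross-entropy objective.
    return (kl * speech_mask).sum() / speech_mask.sum().clamp_min(1.0)
```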

* 5 pages, submitted to ICASSP 2024 

Improving Long Document Topic Segmentation Models With Enhanced Coherence Modeling

Oct 23, 2023
Hai Yu, Chong Deng, Qinglin Zhang, Jiaqing Liu, Qian Chen, Wen Wang

Topic segmentation is critical for obtaining structured documents and improving downstream tasks such as information retrieval. Thanks to their ability to automatically explore clues of topic shift from abundant labeled data, recent supervised neural models have greatly advanced long document topic segmentation, but they leave the deeper relationship between coherence and topic segmentation underexplored. This paper therefore enhances the ability of supervised models to capture coherence from both the logical-structure and semantic-similarity perspectives, proposing Topic-aware Sentence Structure Prediction (TSSP) and Contrastive Semantic Similarity Learning (CSSL) to further improve topic segmentation performance. Specifically, the TSSP task forces the model to comprehend structural information by learning the original relations between adjacent sentences in a disarrayed document, which is constructed by jointly disrupting the original document at the topic and sentence levels. Moreover, we utilize inter- and intra-topic information to construct contrastive samples and design the CSSL objective to ensure that sentence representations within the same topic are more similar, while those in different topics are less similar (see the sketch below). Extensive experiments show that Longformer with our approach significantly outperforms previous state-of-the-art (SOTA) methods. Our approach improves the $F_1$ of the previous SOTA by 3.42 (73.74 -> 77.16) and reduces $P_k$ by 1.11 points (15.0 -> 13.89) on WIKI-727K, and achieves an average relative reduction of 4.3% in $P_k$ on WikiSection. An average relative $P_k$ drop of 8.38% on two out-of-domain datasets further demonstrates the robustness of our approach.
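
As an illustration of the CSSL-style objective, and not the authors' implementation, an InfoNCE-style contrastive loss over same-topic positives and different-topic negatives could look roughly like this; the shapes, temperature, and negative construction are assumptions.

```python
import torch
import torch.nn.functional as F

def cssl_loss(anchor, positives, negatives, temperature=0.1):
    """Pull same-topic sentence embeddings together and push different-topic
    embeddings apart (an illustrative InfoNCE form).

    anchor:    (B, H)      sentence embeddings
    positives: (B, H)      same-topic sentences
    negatives: (B, K, H)   different-topic sentences
    """
    anchor = F.normalize(anchor, dim=-1)
    positives = F.normalize(positives, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_sim = (anchor * positives).sum(-1, keepdim=True)        # (B, 1)
    neg_sim = torch.einsum("bh,bkh->bk", anchor, negatives)     # (B, K)

    # The positive sits at index 0 of the logits for every anchor.
    logits = torch.cat([pos_sim, neg_sim], dim=-1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)
```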

* Accepted by EMNLP 2023. Code is available at https://github.com/alibaba-damo-academy/SpokenNLP/ 

Improving BERT with Hybrid Pooling Network and Drop Mask

Jul 14, 2023
Qian Chen, Wen Wang, Qinglin Zhang, Chong Deng, Ma Yukun, Siqi Zheng

Transformer-based pre-trained language models, such as BERT, achieve great success in various natural language understanding tasks. Prior research found that BERT captures a rich hierarchy of linguistic information at different layers, yet vanilla BERT uses the same self-attention mechanism in every layer to model these different contextual features. In this paper, we propose HybridBERT, which combines self-attention and pooling networks to encode different contextual features in each layer (see the sketch below). Additionally, we propose a simple DropMask method to address the mismatch between pre-training and fine-tuning caused by the excessive use of special mask tokens during Masked Language Modeling pre-training. Experiments show that HybridBERT outperforms BERT in pre-training, with lower loss, faster training (8% relative), and lower memory cost (13% relative), as well as in transfer learning, with 1.5% relatively higher accuracy on downstream tasks. DropMask further improves BERT's accuracy on downstream tasks across various masking rates.
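
The abstract does not detail the layer design, so the following is only a plausible sketch of mixing a self-attention branch with a pooling branch inside one encoder block; the pooling window, mixing projection, and residual placement are assumptions rather than the actual HybridBERT architecture.

```python
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    """Illustrative encoder block combining self-attention with local pooling."""

    def __init__(self, hidden=768, heads=12, window=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.pool = nn.AvgPool1d(window, stride=1, padding=window // 2)
        self.mix = nn.Linear(2 * hidden, hidden)
        self.norm = nn.LayerNorm(hidden)

    def forward(self, x):                                   # x: (B, T, H)
        attn_out, _ = self.attn(x, x, x)                    # global branch
        pool_out = self.pool(x.transpose(1, 2)).transpose(1, 2)  # local branch
        mixed = self.mix(torch.cat([attn_out, pool_out], dim=-1))
        return self.norm(x + mixed)                         # residual + norm
```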

* 7 pages, 2 figures 

Ditto: A Simple and Efficient Approach to Improve Sentence Embeddings

May 18, 2023
Qian Chen, Wen Wang, Qinglin Zhang, Siqi Zheng, Chong Deng, Hai Yu, Jiaqing Liu, Yukun Ma, Chong Zhang

Prior studies diagnose the anisotropy problem in sentence representations from pre-trained language models, e.g., BERT, without fine-tuning. Our analysis reveals that sentence embeddings from BERT suffer from a bias towards uninformative words, limiting performance on semantic textual similarity (STS) tasks. To address this bias, we propose a simple and efficient unsupervised approach, Diagonal Attention Pooling (Ditto), which weights words with model-based importance estimations and computes the weighted average of word representations from a pre-trained model as the sentence embedding (sketched below). Ditto can be easily applied to any pre-trained language model as a postprocessing operation. Compared to prior sentence embedding approaches, Ditto adds no parameters and requires no learning. Empirical evaluations demonstrate that Ditto alleviates the anisotropy problem and improves various pre-trained models on STS tasks.
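
A minimal sketch of diagonal attention pooling with Hugging Face Transformers is shown below; the checkpoint, the chosen layer/head, and the padding handling are illustrative choices, not the configuration reported in the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer

def ditto_embedding(texts, model_name="bert-base-uncased", layer=1, head=10):
    """Diagonal-attention-weighted average of hidden states as sentence embeddings."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, output_attentions=True)

    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)

    hidden = out.last_hidden_state                    # (B, T, H)
    attn = out.attentions[layer][:, head]             # (B, T, T) for one head
    weights = attn.diagonal(dim1=-2, dim2=-1)         # token-to-itself attention
    weights = weights * batch["attention_mask"]       # zero out padding
    weights = weights / weights.sum(-1, keepdim=True).clamp_min(1e-9)
    return (weights.unsqueeze(-1) * hidden).sum(1)    # (B, H)
```

Because the weights come from the model's own attention, the procedure adds no parameters and needs no training, matching the postprocessing-only claim above.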

* 7 pages 

MUG: A General Meeting Understanding and Generation Benchmark

Mar 27, 2023
Qinglin Zhang, Chong Deng, Jiaqing Liu, Hai Yu, Qian Chen, Wen Wang, Zhijie Yan, Jinglin Liu, Yi Ren, Zhou Zhao

Listening to long video/audio recordings from video conferencing and online courses to acquire information is extremely inefficient. Even after ASR systems transcribe recordings into long-form spoken language documents, reading ASR transcripts only partly speeds up information seeking. It has been observed that a range of NLP applications, such as keyphrase extraction, topic segmentation, and summarization, significantly improve users' efficiency in grasping important information. The meeting scenario is among the most valuable for deploying these spoken language processing (SLP) capabilities. However, the lack of large-scale public meeting datasets annotated for these SLP tasks severely hinders their advancement. To promote SLP advancement, we establish a large-scale general Meeting Understanding and Generation Benchmark (MUG) to benchmark the performance of a wide range of SLP tasks, including topic segmentation, topic-level and session-level extractive summarization, topic title generation, keyphrase extraction, and action item detection. To support the MUG benchmark, we construct and release a large-scale meeting dataset for comprehensive long-form SLP development, the AliMeeting4MUG Corpus, which consists of 654 recorded Mandarin meeting sessions with diverse topic coverage and manual annotations for the SLP tasks on manual transcripts of the recordings. To the best of our knowledge, the AliMeeting4MUG Corpus is so far the largest meeting corpus and covers the most SLP tasks. In this paper, we provide a detailed introduction to this corpus, the SLP tasks and evaluation methods, and baseline systems and their performance.

* Paper accepted to the 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023), Rhodes, Greece 

Meeting Action Item Detection with Regularized Context Modeling

Mar 27, 2023
Jiaqing Liu, Chong Deng, Qinglin Zhang, Qian Chen, Wen Wang

Meetings are increasingly important for collaboration. Action items in meeting transcripts are crucial for managing post-meeting to-do tasks, which are usually summarized laboriously. The Action Item Detection task aims to automatically detect meeting content associated with action items. However, datasets manually annotated with action item labels are scarce and small in scale. We construct and release the first Chinese meeting corpus with manual action item annotations. In addition, we propose a Context-Drop approach that utilizes both local and global contexts via contrastive learning, achieving better accuracy and robustness for action item detection (one possible reading is sketched below). We also propose a Lightweight Model Ensemble method to exploit different pre-trained models. Experimental results on our Chinese meeting corpus and the English AMI corpus demonstrate the effectiveness of the proposed approaches.
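
The abstract leaves the mechanics of Context-Drop open, so the helpers below show only one possible reading: build a full-context view and a context-dropped view of each example and encourage consistent predictions between them. The function names, the drop probability, and the symmetric-KL consistency term are all assumptions.

```python
import torch
import torch.nn.functional as F

def context_drop_views(target_sentence, context_sentences, drop_prob=0.3):
    """Return the full-context view and a view with some context sentences
    randomly dropped; how the two views are tied together is up to the
    training loop."""
    kept = [s for s in context_sentences if torch.rand(1).item() > drop_prob]
    full_view = context_sentences + [target_sentence]
    dropped_view = kept + [target_sentence]
    return full_view, dropped_view

def consistency_term(logits_full, logits_dropped):
    """Symmetric KL between the two views' predictions, one possible way to
    make the classifier robust to dropped context."""
    p = F.log_softmax(logits_full, dim=-1)
    q = F.log_softmax(logits_dropped, dim=-1)
    return 0.5 * (F.kl_div(q, p.exp(), reduction="batchmean")
                  + F.kl_div(p, q.exp(), reduction="batchmean"))
```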

* 5 pages, 2 figures. Paper accepted to the 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023), Rhodes, Greece 

Overview of the ICASSP 2023 General Meeting Understanding and Generation Challenge (MUG)

Mar 24, 2023
Qinglin Zhang, Chong Deng, Jiaqing Liu, Hai Yu, Qian Chen, Wen Wang, Zhijie Yan, Jinglin Liu, Yi Ren, Zhou Zhao

The ICASSP 2023 General Meeting Understanding and Generation Challenge (MUG) focuses on promoting a wide range of spoken language processing (SLP) research on meeting transcripts, as SLP applications are critical for improving users' efficiency in grasping important information in meetings. MUG includes five tracks: topic segmentation, topic-level and session-level extractive summarization, topic title generation, keyphrase extraction, and action item detection. To facilitate MUG, we construct and release a large-scale meeting dataset, the AliMeeting4MUG Corpus.

* Paper accepted to the 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023), Rhodes, Greece 

Weighted Sampling for Masked Language Modeling

Feb 28, 2023
Linhan Zhang, Qian Chen, Wen Wang, Chong Deng, Xin Cao, Kongzhang Hao, Yuxin Jiang, Wei Wang

Masked Language Modeling (MLM) is widely used to pretrain language models. The standard random masking strategy in MLM causes pre-trained language models (PLMs) to be biased toward high-frequency tokens: representation learning of rare tokens is poor, and PLMs achieve limited performance on downstream tasks. To alleviate this frequency bias, we propose two simple and effective Weighted Sampling strategies for masking tokens based on token frequency and training loss (a frequency-based variant is sketched below). We apply these two strategies to BERT and obtain Weighted-Sampled BERT (WSBERT). Experiments on the Semantic Textual Similarity (STS) benchmark show that WSBERT significantly improves sentence embeddings over BERT. Combining WSBERT with calibration methods and prompt learning further improves sentence embeddings. We also investigate fine-tuning WSBERT on the GLUE benchmark and show that Weighted Sampling also improves the transfer learning capability of the backbone PLM. We further analyze and provide insights into how WSBERT improves token embeddings.
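
Of the two strategies, only a frequency-based variant is sketched here; a loss-based one would replace the weights with per-token training losses. The exponent alpha and the per-sequence sampling scheme are assumptions, not the paper's exact recipe.

```python
import torch

def frequency_weighted_mask(input_ids, token_freq, mask_ratio=0.15, alpha=0.5):
    """Choose positions to mask with probability skewed toward rare tokens.

    input_ids:  (B, T)  token ids
    token_freq: (V,)    corpus frequency of each vocabulary item
    """
    # Rare tokens get larger weights: w = 1 / freq^alpha.
    weights = 1.0 / token_freq[input_ids].clamp_min(1).float().pow(alpha)  # (B, T)
    num_mask = max(1, int(mask_ratio * input_ids.size(1)))

    # Sample positions per sequence, without replacement, proportionally to weight.
    idx = torch.multinomial(weights, num_mask, replacement=False)          # (B, num_mask)
    mask = torch.zeros_like(input_ids, dtype=torch.bool)
    mask.scatter_(1, idx, True)
    return mask
```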

* 2023 IEEE International Conference on Acoustics, Speech and Signal Processing  
* 6 pages, 2 figures 

MDERank: A Masked Document Embedding Rank Approach for Unsupervised Keyphrase Extraction

Oct 13, 2021
Linhan Zhang, Qian Chen, Wen Wang, Chong Deng, Shiliang Zhang, Bing Li, Wei Wang, Xin Cao

Keyphrases are phrases in a document that concisely summarize its core content, helping readers quickly grasp what the article is about. However, existing unsupervised approaches are not robust enough to handle various types of documents, owing to the mismatch of sequence lengths in similarity comparison. In this paper, we propose a novel unsupervised keyphrase extraction method that leverages a BERT-based model to select and rank candidate keyphrases with a MASK strategy (see the sketch below). In addition, we further enhance the model, denoted Keyphrases Extraction BERT (KPEBERT), by designing a compatible self-supervised task and conducting contrastive learning. Extensive experimental evaluation demonstrates the superiority and robustness of the proposed method as well as the effectiveness of KPEBERT.
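
The mask-and-compare ranking idea can be sketched as follows; the mean pooling, the string-level masking, and the model checkpoint are simplifying assumptions rather than the exact procedure in the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer

def mde_rank(document, candidates, model_name="bert-base-uncased"):
    """Rank candidate keyphrases by how much masking them changes the
    document embedding: bigger change (lower similarity) means more important."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)

    def embed(text):
        batch = tok(text, truncation=True, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**batch).last_hidden_state   # (1, T, H)
        return hidden.mean(1).squeeze(0)                # mean-pooled embedding

    doc_emb = embed(document)
    scores = {}
    for cand in candidates:
        masked_doc = document.replace(cand, tok.mask_token)
        sim = torch.cosine_similarity(doc_emb, embed(masked_doc), dim=0)
        scores[cand] = -sim.item()                      # lower similarity ranks higher
    return sorted(candidates, key=lambda c: scores[c], reverse=True)
```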

* 13 pages, 5 figures 