Weiqing Wang

End-to-end Online Speaker Diarization with Target Speaker Tracking

Oct 12, 2023
Weiqing Wang, Ming Li

This paper proposes an online target-speaker voice activity detection system for speaker diarization that does not require a priori knowledge from a clustering-based diarization system to obtain the target-speaker embeddings. By adapting conventional target-speaker voice activity detection for real-time operation, the framework identifies speaker activities using self-generated embeddings, yielding consistent performance without permutation inconsistencies during inference. In the inference process, a front-end model extracts frame-level speaker embeddings for each incoming block of the signal. Next, we predict the detection state of each speaker based on these frame-level embeddings and the previously estimated target-speaker embeddings. The target-speaker embeddings are then updated by aggregating the frame-level embeddings according to the predictions in the current block. Our model predicts the results for each block and updates the target-speaker embeddings until the end of the signal is reached. Experimental results show that the proposed method outperforms the offline clustering-based diarization system on the DIHARD III and AliMeeting datasets. When further extended to multi-channel data, the method achieves performance comparable to state-of-the-art offline diarization systems.
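The block-wise inference loop described above can be sketched as follows. This is a minimal illustration: `front_end` and `predictor` are hypothetical stand-ins for the paper's models, and the target-speaker embeddings are maintained as a running mean over frames predicted active.

```python
import numpy as np

def online_tsvad(blocks, front_end, predictor, max_spk=4, dim=256):
    """Block-wise online diarization sketch.

    front_end(block)             -> frame-level embeddings, shape (T, dim)
    predictor(frame_embs, targets) -> per-frame speaker activities, shape (T, max_spk)
    """
    targets = np.zeros((max_spk, dim))  # running target-speaker embeddings
    counts = np.zeros(max_spk)          # frames aggregated per speaker so far
    results = []
    for block in blocks:
        frame_embs = front_end(block)              # (T, dim)
        activity = predictor(frame_embs, targets)  # (T, max_spk)
        results.append(activity)
        # Update each target embedding with the frames it was active in.
        for s in range(max_spk):
            active = activity[:, s] > 0.5
            n = int(active.sum())
            if n:
                total = targets[s] * counts[s] + frame_embs[active].sum(axis=0)
                counts[s] += n
                targets[s] = total / counts[s]
    return np.concatenate(results, axis=0)
```

Because the targets are re-estimated from the model's own predictions block by block, no clustering pass over the full recording is needed before inference starts.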

* Submitted to IEEE/ACM Transactions on Audio, Speech, and Language Processing 

The DKU-DUKEECE System for the Manipulation Region Location Task of ADD 2023

Aug 20, 2023
Zexin Cai, Weiqing Wang, Yikang Wang, Ming Li

This paper introduces our system for Track 2 of the second Audio Deepfake Detection Challenge (ADD 2023), which focuses on locating manipulated regions. Our approach uses multiple detection systems to identify splicing regions and determine their authenticity. Specifically, we train and integrate two frame-level systems: one for boundary detection and the other for deepfake detection. In addition, we employ a third system, a VAE model trained exclusively on genuine data, to determine the authenticity of a given audio clip. Through the fusion of these three systems, our top-performing solution achieves 82.23% sentence accuracy and an F1 score of 60.66%, giving a final ADD score of 0.6713 and securing first place in Track 2 of ADD 2023.
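A minimal sketch of how such a three-way fusion might combine the two frame-level scores with a clip-level genuineness gate; the weights, threshold, and function name are illustrative assumptions, not the actual DKU-DUKEECE configuration.

```python
import numpy as np

def fuse_frame_scores(boundary, fake, clip_genuine_score,
                      w=(0.5, 0.5), genuine_thr=0.5):
    """Average two frame-level manipulation scores, then suppress the
    manipulated-region prediction entirely when the clip-level model
    (e.g. a VAE trained on genuine data) judges the whole clip genuine."""
    frame = w[0] * np.asarray(boundary, dtype=float) + \
            w[1] * np.asarray(fake, dtype=float)
    if clip_genuine_score > genuine_thr:  # whole clip judged genuine
        return np.zeros_like(frame)
    return frame
```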

* The DKU-DukeECE system description to Task 2 of Audio Deepfake Detection Challenge (ADD 2023) 

The DKU-MSXF Diarization System for the VoxCeleb Speaker Recognition Challenge 2023

Aug 17, 2023
Ming Cheng, Weiqing Wang, Xiaoyi Qin, Yuke Lin, Ning Jiang, Guoqing Zhao, Ming Li

This paper describes the DKU-MSXF submission to Track 4 of the VoxCeleb Speaker Recognition Challenge 2023 (VoxSRC-23). Our pipeline comprises voice activity detection, clustering-based diarization, overlapped speech detection, and target-speaker voice activity detection (TSVAD), where each stage fuses the outputs of three sub-models. Finally, we fuse the different clustering-based and TSVAD-based diarization systems using DOVER-Lap and achieve a 4.30% diarization error rate (DER), ranking first on the Track 4 leaderboard.

Generating Faithful Text From a Knowledge Graph with Noisy Reference Text

Aug 12, 2023
Tahsina Hashem, Weiqing Wang, Derry Tanti Wijaya, Mohammed Eunus Ali, Yuan-Fang Li

Knowledge Graph (KG)-to-Text generation aims to generate fluent natural-language text that accurately represents the information in a given knowledge graph. While significant progress has been made on this task by exploiting pre-trained language models (PLMs) with appropriate graph structure-aware modules, existing models still fall short of generating faithful text, especially when the ground-truth natural-language text contains additional information that is not present in the graph. In this paper, we develop a KG-to-text generation model that can generate faithful natural-language text from a given graph in the presence of noisy reference text. Our framework incorporates two core ideas. First, we use contrastive learning to enhance the model's ability to differentiate between faithful and hallucinated information in the text, thereby encouraging the decoder to generate text that aligns with the input graph. Second, we empower the decoder to control the level of hallucination in the generated text through a controllable text generation technique. We evaluate our model's performance with standard quantitative metrics as well as a ChatGPT-based quantitative and qualitative analysis. The evaluation demonstrates that our model outperforms state-of-the-art KG-to-text models on faithfulness.
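The contrastive idea can be illustrated with an InfoNCE-style loss that pulls a faithful text embedding toward the graph embedding and pushes hallucinated variants away. This is a toy sketch on plain vectors; the embedding space and temperature are assumptions, not the paper's actual setup.

```python
import math
import numpy as np

def contrastive_loss(graph_emb, faithful_emb, hallucinated_embs, tau=0.1):
    """InfoNCE-style loss: low when the faithful embedding is the most
    similar to the graph embedding, high when a hallucinated one is."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = math.exp(cos(graph_emb, faithful_emb) / tau)
    negs = sum(math.exp(cos(graph_emb, h) / tau) for h in hallucinated_embs)
    return -math.log(pos / (pos + negs))
```

Minimizing this loss over training pairs encourages the encoder/decoder to place faithful text closer to its source graph than any hallucinated rewrite.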

Newton-Cotes Graph Neural Networks: On the Time Evolution of Dynamic Systems

May 24, 2023
Lingbing Guo, Weiqing Wang, Zhuo Chen, Ningyu Zhang, Zequn Sun, Yixuan Lai, Qiang Zhang, Huajun Chen

Reasoning about system dynamics is one of the most important analytical approaches in many scientific studies. Given the initial state of a system as input, recent graph neural network (GNN)-based methods can predict a future state distant in time with high accuracy. Although these methods differ in how they model the coordinates and interacting forces of the system, we show that they actually share a common paradigm: learning the integral of the velocity over the interval between the initial and terminal coordinates, with an integrand that is constant w.r.t. time. Inspired by this observation, we propose a new approach that predicts the integral from several velocity estimates using Newton-Cotes formulas, and we prove its effectiveness theoretically. Extensive experiments on several benchmarks empirically demonstrate consistent and significant improvements over state-of-the-art methods.
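For intuition, closed Newton-Cotes formulas estimate an integral from a few equally spaced evaluations of the integrand (here, the velocity): higher-order rules use more evaluations and standard precomputed weights. A minimal implementation of the classic low-order rules:

```python
import math

def newton_cotes(f, a, b, order):
    """Closed Newton-Cotes quadrature of f over [a, b] for small orders.

    order 1: trapezoidal rule, 2: Simpson's rule, 4: Boole's rule.
    Weights are the standard ones, scaled so the result is
    h * sum(w_i * f(a + i*h)) with h = (b - a) / order.
    """
    weights = {
        1: [1/2, 1/2],
        2: [1/3, 4/3, 1/3],
        4: [14/45, 64/45, 24/45, 64/45, 14/45],
    }
    w = weights[order]
    h = (b - a) / order
    return h * sum(wi * f(a + i * h) for i, wi in enumerate(w))

# e.g. newton_cotes(math.sin, 0.0, math.pi, 4) approximates the exact
# integral 2 from only five velocity evaluations (error below 0.01).
```

The paper's observation corresponds to the order-1 constant-integrand case; using more velocity estimates moves the prediction to a higher-order rule with provably smaller truncation error for smooth dynamics.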

* Under review 

DNG: Taxonomy Expansion by Exploring the Intrinsic Directed Structure on Non-Gaussian Space

Feb 22, 2023
Songlin Zhai, Weiqing Wang, Yuanfang Li, Yuan Meng

Taxonomy expansion is the process of incorporating a large number of additional nodes (i.e., "queries") into an existing taxonomy (i.e., "seed"), the most important step being the selection of an appropriate position for each query. Enormous efforts have been made to exploit the seed's structure, yet existing approaches are deficient in their mining of structural information in two ways: poor modeling of the hierarchical semantics and failure to capture the directionality of the is-a relation. This paper addresses these issues by explicitly denoting each node as the combination of an inherited feature (i.e., structural part) and an incremental feature (i.e., supplementary part). Specifically, the inherited feature originates from "parent" nodes and is weighted by an inheritance factor. With this node representation, the hierarchy of semantics in taxonomies (i.e., the inheritance and accumulation of features from "parent" to "child") can be embodied. Additionally, based on this representation, the directionality of the is-a relation can be easily translated into the irreversible inheritance of features. Inspired by the Darmois-Skitovich theorem, we implement this irreversibility with a non-Gaussian constraint on the supplementary feature. A log-likelihood learning objective is further used to optimize the proposed model (dubbed DNG), whereby the required non-Gaussianity is also theoretically ensured. Extensive experimental results on two real-world datasets verify the superiority of DNG over several strong baselines.
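The node representation can be written in a few lines: a child's embedding is its parent's embedding scaled by an inheritance factor, plus a non-Gaussian supplementary feature. Laplace noise is used here merely as one convenient non-Gaussian choice; the names, scale, and dimensionality are illustrative, not DNG's learned parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def child_embedding(parent, alpha=0.8, scale=0.1):
    """Toy node representation: inherited feature (alpha * parent)
    plus a non-Gaussian (Laplace-distributed) supplementary feature."""
    inherited = alpha * parent
    supplementary = rng.laplace(scale=scale, size=parent.shape)
    return inherited + supplementary
```

Because the supplementary part is non-Gaussian, the parent cannot be re-expressed as a scaled child plus independent noise of the same family, which is how the representation encodes the irreversibility (directionality) of is-a.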

* 7 figures 

On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex

Feb 06, 2023
Terry Yue Zhuo, Zhuang Li, Yujin Huang, Fatemeh Shiri, Weiqing Wang, Gholamreza Haffari, Yuan-Fang Li

Semantic parsing is a technique aimed at constructing a structured representation of the meaning of a natural-language question. Recent advances in few-shot language models trained on code have demonstrated superior performance in generating these representations compared to traditional unimodal language models trained on downstream tasks. Despite these advances, existing fine-tuned neural semantic parsers are susceptible to adversarial attacks on natural-language inputs. While it has been established that the robustness of smaller semantic parsers can be enhanced through adversarial training, this approach is not feasible for large language models in real-world scenarios, as it requires both substantial computational resources and expensive human annotation of in-domain semantic parsing data. This paper presents the first empirical study of the adversarial robustness of a large prompt-based language model of code, Codex. Our results demonstrate that state-of-the-art (SOTA) code-language models are vulnerable to carefully crafted adversarial examples. To address this challenge, we propose methods for improving robustness without the need for significant amounts of labeled data or heavy computational resources.

* Accepted at EACL2023 (main) 

HiTSKT: A Hierarchical Transformer Model for Session-Aware Knowledge Tracing

Dec 23, 2022
Fucai Ke, Weiqing Wang, Weicong Tan, Lan Du, Yuan Jin, Yujin Huang, Hongzhi Yin

Knowledge tracing (KT) aims to leverage students' learning histories to estimate their mastery of a set of pre-defined skills, based on which their future performance can be accurately predicted. In practice, a student's learning history comprises answers to sets of massed questions, each set known as a session, rather than merely a sequence of independent answers. Within and across these sessions, students' learning dynamics can be very different. Therefore, effectively modeling the dynamics of students' knowledge states both within and across sessions is crucial for the KT problem. Most existing KT models treat a student's learning record as a single continuing sequence, without capturing the sessional shifts in the student's knowledge state. To address this issue, we propose a novel hierarchical transformer model, named HiTSKT, which comprises an interaction(-level) encoder to capture the knowledge a student acquires within a session, and a session(-level) encoder to summarise the knowledge acquired across past sessions. To predict an interaction in the current session, a knowledge retriever integrates the summarised past-session knowledge with the previous interactions' information into proper knowledge representations, which are then used to compute the student's current knowledge state. Additionally, to model the student's long-term forgetting behaviour across sessions, a power-law-decay attention mechanism is designed and deployed in the session encoder, allowing it to place more emphasis on recent sessions. Extensive experiments on three public datasets demonstrate that HiTSKT achieves new state-of-the-art performance on all of them compared with six state-of-the-art KT models.
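The power-law-decay idea can be sketched as single-query attention whose weights are damped by (session distance + 1) raised to the power -p, so older sessions are down-weighted smoothly rather than cut off. This illustrates the decay mechanism only; it is not the paper's exact formulation.

```python
import numpy as np

def power_law_decay_attention(q, K, V, positions, p=0.5):
    """Single-query attention with power-law decay over session distance.

    q: query vector (d,); K: session keys (n, d); V: session values (n, m);
    positions: session indices (n,), with positions[-1] the current session.
    """
    scores = K @ q / np.sqrt(q.shape[-1])            # scaled dot-product
    decay = (positions[-1] - positions + 1.0) ** (-p)  # power-law damping
    weights = np.exp(scores - scores.max()) * decay    # decayed softmax
    weights /= weights.sum()
    return weights @ V
```

With equal content scores, the decay alone makes recent sessions dominate the mixture, mirroring long-term forgetting of older sessions.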

Target-Speaker Voice Activity Detection via Sequence-to-Sequence Prediction

Nov 03, 2022
Ming Cheng, Weiqing Wang, Yucong Zhang, Xiaoyi Qin, Ming Li

Target-speaker voice activity detection is currently a promising approach for speaker diarization in complex acoustic environments. This paper presents a novel Sequence-to-Sequence Target-Speaker Voice Activity Detection (Seq2Seq-TSVAD) method that efficiently handles the joint modeling of a large number of speakers and predicts high-resolution voice activities. Experimental results show that a larger speaker capacity and a higher output resolution significantly reduce the diarization error rate (DER), achieving new state-of-the-art performance of 4.55% on the VoxConverse test set and 10.77% on Track 1 of the DIHARD-III evaluation set under widely used evaluation metrics.

* submitted to ICASSP2023 