Zhijing Yang

Active Mining Sample Pair Semantics for Image-text Matching

Nov 09, 2023
Yongfeng Chen, Jin Liu, Zhijing Yang, Ruihan Chen, Junpeng Tan

Recently, commonsense learning has become a hot topic in image-text matching. Although it can describe richer image-text correlations, commonsense learning still has shortcomings: 1) existing methods rely on a triplet semantic-similarity loss, which cannot effectively match the intractable negative samples in image-text sample pairs; and 2) the weak generalization ability of the model leads to poor image-text matching performance on large-scale datasets. To address these shortcomings, this paper proposes a novel image-text matching model, called the Active Mining Sample Pair Semantics image-text matching model (AMSPS). In contrast to the single semantic learning mode of commonsense-learning models with a triplet loss function, AMSPS follows an active learning idea. First, the proposed Adaptive Hierarchical Reinforcement Loss (AHRL) offers diversified learning modes: its active learning mode enables the model to focus more on intractable negative samples to enhance its discriminative ability. In addition, AMSPS can adaptively mine more hidden relevant semantic representations from uncommented items, which greatly improves the performance and generalization ability of the model. Experimental results on the Flickr30K and MSCOCO benchmark datasets show that the proposed method outperforms state-of-the-art comparison methods.
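The paper's AHRL loss is not public in detail here; as background, the following is a minimal sketch of the standard hinge-based triplet ranking loss with hardest-negative mining that image-text matching models of this family commonly build on. The function name, margin value, and toy similarity matrix are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def triplet_loss_hard_negative(sim, margin=0.2):
    """Hinge triplet ranking loss with hardest-negative mining.

    sim[i, j] is the similarity between image i and caption j;
    the diagonal holds the matched (positive) pairs.
    """
    n = sim.shape[0]
    pos = np.diag(sim)                  # similarity of matched pairs
    mask = np.eye(n, dtype=bool)
    neg = np.where(mask, -np.inf, sim)  # exclude positives from mining
    hard_i2t = neg.max(axis=1)          # hardest caption for each image
    hard_t2i = neg.max(axis=0)          # hardest image for each caption
    loss = (np.maximum(0, margin + hard_i2t - pos).sum()
            + np.maximum(0, margin + hard_t2i - pos).sum())
    return loss / n

sim = np.array([[0.9, 0.2],
                [0.1, 0.8]])
print(triplet_loss_hard_negative(sim))  # 0.0: positives beat all negatives by > margin
```

Mining only the hardest negative per query (rather than summing over all negatives) is what lets such losses concentrate gradient signal on the difficult pairs.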


Learning to In-paint: Domain Adaptive Shape Completion for 3D Organ Segmentation

Aug 17, 2023
Mingjin Chen, Yongkang He, Yongyi Lu, Zhijing Yang

We aim to incorporate explicit shape information into current 3D organ segmentation models. Different from previous works, we formulate shape learning as an in-painting task, named Masked Label Mask Modeling (MLM). Through MLM, learnable mask tokens are fed into transformer blocks to complete the label mask of an organ. To transfer MLM shape knowledge to the target domain, we further propose a novel shape-aware self-distillation with both an in-painting reconstruction loss and a pseudo loss. Extensive experiments on five public organ segmentation datasets show consistent improvements over prior art, with at least a 1.2-point gain in the Dice score, demonstrating the effectiveness of our method in challenging unsupervised domain adaptation scenarios including: (1) in-domain organ segmentation; (2) unseen domain segmentation; and (3) unseen organ segmentation. We hope this work will advance shape analysis and geometric learning in medical imaging.


Data-Centric Diet: Effective Multi-center Dataset Pruning for Medical Image Segmentation

Aug 02, 2023
Yongkang He, Mingjin Chen, Zhijing Yang, Yongyi Lu

This paper seeks to address dense labeling problems in which a significant fraction of the dataset can be pruned without sacrificing much accuracy. We observe that, on standard medical image segmentation benchmarks, the loss-gradient-norm-based metrics of individual training examples used in image classification fail to identify the important samples. To address this issue, we propose a data pruning method that takes into consideration the training dynamics on target regions using a Dynamic Average Dice (DAD) score. To the best of our knowledge, we are among the first to address data importance in dense labeling tasks in the field of medical image analysis, making the following contributions: (1) investigating the underlying causes with rigorous empirical analysis, and (2) determining an effective data pruning approach for dense labeling problems. Our solution can be used as a strong yet simple baseline to select important examples for medical image segmentation with combined data sources.
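The DAD score above builds on per-sample Dice tracked across training. As an illustration only (the paper's exact DAD definition and pruning rule are not reproduced here), the sketch below computes the Dice coefficient for binary masks and averages each sample's Dice over epochs; all names and the toy masks are assumptions.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def dynamic_average_dice(dice_history):
    """Average each sample's Dice over training epochs.

    dice_history: dict mapping sample id -> list of per-epoch Dice scores.
    The resulting per-sample averages give a ranking of training examples
    by how the model handles them over time (an illustrative criterion,
    not the paper's exact score).
    """
    return {sid: float(np.mean(scores)) for sid, scores in dice_history.items()}

pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
print(round(dice_score(pred, target), 3))  # 0.667
```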

* Accepted by ICML workshops 2023 

A Transformer-based Prediction Method for Depth of Anesthesia During Target-controlled Infusion of Propofol and Remifentanil

Aug 02, 2023
Yongkang He, Siyuan Peng, Mingjin Chen, Zhijing Yang, Yuanhui Chen

Accurately predicting anesthetic effects is essential for target-controlled infusion systems. Traditional pharmacokinetic-pharmacodynamic (PK-PD) models for Bispectral Index (BIS) prediction require manual selection of model parameters, which can be challenging in clinical settings. Recently proposed deep learning methods can only capture general trends and may not predict abrupt changes in BIS. To address these issues, we propose a transformer-based method for predicting the depth of anesthesia (DOA) using drug infusions of propofol and remifentanil. Our method employs long short-term memory (LSTM) and gated residual network (GRN) modules to improve the efficiency of feature fusion, and applies an attention mechanism to discover the interactions between the drugs. We also use label distribution smoothing and reweighting losses to address data imbalance. Experimental results show that our proposed method outperforms traditional PK-PD models and previous deep learning methods, effectively predicting anesthetic depth under sudden and deep anesthesia conditions.
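The label distribution smoothing mentioned above can be illustrated with a generic sketch following the common formulation for imbalanced regression (smooth the empirical label density with a kernel, then reweight samples by the inverse smoothed density). This is not necessarily the paper's exact variant; the bin count, kernel width, and BIS-like label values are illustrative assumptions.

```python
import numpy as np

def lds_weights(labels, bins=10, sigma=1.0):
    """Label distribution smoothing: reweight samples by the inverse of a
    kernel-smoothed label density, so rare target values count more."""
    hist, edges = np.histogram(labels, bins=bins)
    # Truncated Gaussian kernel over bin indices
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    smoothed = np.convolve(hist.astype(float), kernel, mode="same")
    # Map each label back to its bin and take the inverse smoothed density
    idx = np.clip(np.digitize(labels, edges[1:-1]), 0, bins - 1)
    w = 1.0 / np.maximum(smoothed[idx], 1e-12)
    return w * len(w) / w.sum()  # normalise weights to mean 1

labels = np.array([40, 41, 42, 43, 44, 90], dtype=float)  # BIS-like values
w = lds_weights(labels)
print(w.round(2))  # the rare label (90) receives the largest weight
```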


Low Rank Properties for Estimating Microphones Start Time and Sources Emission Time

Jul 22, 2023
Faxian Cao, Yongqiang Cheng, Adil Mehmood Khan, Zhijing Yang, S. M. Ahsan Kazmi, Yingxiu Chang

Uncertainty in timing information, namely the start times of microphone recordings and the sources' emission times, poses significant challenges in various applications, such as joint microphones and sources localization. Traditional optimization methods, which directly estimate this unknown timing information (UTIm), often fall short compared to approaches exploiting the low-rank property (LRP). The LRP provides an additional low-rank structure, which yields a linear constraint on the UTIm. This allows globally optimal solutions for the UTIm to be attained, given proper initialization. However, the initialization process often involves randomness, leading to suboptimal local minima. This paper presents a novel combined low-rank approximation (CLRA) method designed to mitigate the effects of this random initialization. We introduce three new LRP variants, underpinned by mathematical proof, which allow the UTIm to draw on a richer pool of low-rank structural information. Utilizing this augmented low-rank structural information from both the LRP and the proposed variants, we formulate four linear constraints on the UTIm. Employing the proposed CLRA algorithm, we derive globally optimal solutions for the UTIm via these four linear constraints. Experimental results highlight the superior performance of our method over existing state-of-the-art approaches, measured in terms of both the recovery number and reduced estimation errors of the UTIm.
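The paper's specific low-rank properties are not reproduced here. As general background on exploiting low-rank structure, the sketch below computes the best rank-r approximation of a matrix via truncated SVD (Eckart-Young) and shows that a rank-1 "timing-style" matrix, such as arises when unknown offsets enter measurements as an outer product, is recovered exactly; the matrices are illustrative assumptions.

```python
import numpy as np

def best_rank_r(M, r):
    """Best rank-r approximation of M in the Frobenius/spectral norm
    (Eckart-Young theorem), via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# A rank-1 matrix built as an outer product of two offset vectors
a = np.array([[1.0], [2.0], [3.0]])
b = np.array([[4.0, 5.0]])
M = a @ b
approx = best_rank_r(M, 1)
print(np.allclose(M, approx))  # True: a rank-1 matrix is recovered exactly
```

Constraining an estimate to satisfy such a low-rank structure is what turns an otherwise non-convex timing estimation into a linearly constrained problem.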

* 13 pages of main content; 9 pages of proofs for the proposed low-rank properties; 13 figures 

NegVSR: Augmenting Negatives for Generalized Noise Modeling in Real-World Video Super-Resolution

May 24, 2023
Yexing Song, Meilin Wang, Xiaoyu Xian, Zhijing Yang, Yuming Fan, Yukai Shi

The capability of video super-resolution (VSR) to synthesize high-resolution (HR) video from ideal datasets has been demonstrated in many works. However, applying VSR models to real-world video with unknown and complex degradation remains a challenging task. First, the degradation models used in most VSR methods cannot effectively simulate real-world noise and blur; instead, simple combinations of classical degradations are used for real-world noise modeling, so VSR models are often disrupted by out-of-distribution noise. Second, many SR models focus on noise simulation and transfer, yet the sampled noise is monotonous and limited. To address these problems, we propose a negatives-augmentation strategy for generalized noise modeling in the video super-resolution (NegVSR) task. Specifically, we first propose sequential noise generation on real-world data to extract practical noise sequences. Then, the degradation domain is widely expanded by negative augmentation to build varied yet challenging real-world noise sets. We further propose an augmented negative guidance loss to effectively learn robust features among augmented negatives. Extensive experiments on real-world datasets (e.g., VideoLQ and FLIR) show that our method outperforms state-of-the-art methods by clear margins, especially in visual quality.
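NegVSR's own augmentation pipeline is not reproduced here. As a generic illustration of expanding a small set of extracted noise patches into a larger noise set, the sketch below applies the eight dihedral variants (rotations and flips) to each patch; the function name and patch shape are assumptions.

```python
import numpy as np

def augment_noise_patches(patches):
    """Expand extracted noise patches by 90-degree rotations and
    horizontal flips: 8 dihedral variants per input patch."""
    out = []
    for p in patches:
        for k in range(4):
            r = np.rot90(p, k)
            out.append(r)
            out.append(np.fliplr(r))
    return out

patch = np.random.default_rng(0).normal(0, 5, size=(4, 4))
aug = augment_noise_patches([patch])
print(len(aug))  # 8 variants from one patch
```

Because rotations and flips preserve the noise statistics while changing its spatial arrangement, such expansions cheaply diversify a degradation set.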


Are Microphone Signals Alone Sufficient for Joint Microphones and Sources Localization?

May 19, 2023
Faxian Cao, Yongqiang Cheng, Adil Mehmood Khan, Zhijing Yang

Joint microphones and sources localization can be achieved by using both time of arrival (TOA) and time difference of arrival (TDOA) measurements, even in scenarios where both microphones and sources are asynchronous due to unknown emission time of human voices or sources and unknown recording start time of independent microphones. However, TOA measurements require both microphone signals and the waveform of source signals while TDOA measurements can be obtained using microphone signals alone. In this letter, we explore the sufficiency of using only microphone signals for joint microphones and sources localization by presenting two mapping functions for both TOA and TDOA formulas. Our proposed mapping functions demonstrate that the transformations of TOA and TDOA formulas can be the same, indicating that microphone signals alone are sufficient for joint microphones and sources localization without knowledge of the waveform of source signals. We have validated our proposed mapping functions through both mathematical proof and experimental results.
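The key point that TDOA needs no knowledge of the source waveform can be seen in a toy numeric check: in a TDOA between two microphones, the unknown emission time cancels. The sketch below is an illustration under assumed positions and timings, not the paper's mapping functions.

```python
import numpy as np

def toa(mic_pos, src_pos, emission_time, start_time, c=343.0):
    """Time of arrival recorded by an asynchronous microphone:
    unknown emission time plus propagation delay, shifted by the
    microphone's unknown recording start time."""
    d = np.linalg.norm(mic_pos - src_pos)
    return emission_time + d / c - start_time

mics = [np.array([0.0, 0.0]), np.array([2.0, 0.0])]
src = np.array([0.5, 1.0])
starts = [0.05, 0.12]  # unknown per-microphone recording start times

# TDOA between the two microphones for the same source
t = [toa(m, src, emission_time=0.3, start_time=s) for m, s in zip(mics, starts)]
tdoa = t[0] - t[1]

# Repeating with a completely different (unknown) emission time
t2 = [toa(m, src, emission_time=7.7, start_time=s) for m, s in zip(mics, starts)]
print(np.isclose(tdoa, t2[0] - t2[1]))  # True: emission time cancels in the TDOA
```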

* 2 figures 

Justices for Information Bottleneck Theory

May 19, 2023
Faxian Cao, Yongqiang Cheng, Adil Mehmood Khan, Zhijing Yang

This study comes as a timely response to mounting criticism of the information bottleneck (IB) theory, injecting fresh perspectives to rectify misconceptions and reaffirm its validity. Firstly, we introduce an auxiliary function to reinterpret the maximal coding rate reduction method as a special yet locally optimal case of IB theory. Through this auxiliary function, we clarify the paradox of decreasing mutual information during the application of ReLU activation in deep learning (DL) networks. Secondly, we challenge the doubts about IB theory's applicability by demonstrating its capacity to explain the absence of a compression phase with linear activation functions in hidden layers, when viewed through the lens of the auxiliary function. Lastly, by taking a novel theoretical stance, we provide a new way to interpret the inner organization of DL networks using IB theory, aligning it with recent experimental evidence. Thus, this paper serves as an act of justice for IB theory, potentially reinvigorating its standing and application in DL and other fields such as communications and biomedical research.

* 9 pages, 1 figure (4 subfigures) 

Open-World Pose Transfer via Sequential Test-Time Adaption

Mar 20, 2023
Junyang Chen, Xiaoyu Xian, Zhijing Yang, Tianshui Chen, Yongyi Lu, Yukai Shi, Jinshan Pan, Liang Lin

Pose transfer, which aims to map a given person into a specified posture, has recently attracted considerable attention. A typical pose transfer framework employs representative datasets to train a discriminative model, which is often disrupted by out-of-distribution (OOD) instances. Recently, test-time adaption (TTA) has offered a feasible solution for OOD data by using a pre-trained model that learns essential features with self-supervision. However, these methods implicitly assume that all test distributions share a unified signal that can be learned directly. In open-world conditions, the pose transfer task raises various independent signals (OOD appearance and skeleton) that need to be extracted and handled separately. To address this, we develop a SEquential Test-time Adaption (SETA) method. In the test-time phase, SETA extracts and distills external appearance texture by augmenting OOD data for self-supervised training. To make the non-Euclidean similarity among different postures explicit, SETA uses image representations derived from a person re-identification (Re-ID) model for similarity computation. By addressing implicit posture representations sequentially at test time, SETA greatly improves the generalization performance of current pose transfer models. In our experiments, we first show that pose transfer can be applied to open-world applications, including TikTok reenactment and celebrity motion synthesis.

* We call for a solid pose transfer model that can handle open-world instances beyond a specific dataset 