Peng Hu

Decoupled Contrastive Multi-view Clustering with High-order Random Walks

Aug 22, 2023
Yiding Lu, Yijie Lin, Mouxing Yang, Dezhong Peng, Peng Hu, Xi Peng

Recently, some robust contrastive multi-view clustering (MvC) methods have been proposed, which construct data pairs from neighborhoods to alleviate the false negative issue, i.e., some intra-cluster samples being wrongly treated as negative pairs. Although these methods achieve promising performance, the false negative issue remains far from being addressed, and a false positive issue emerges because all in-neighborhood and out-of-neighborhood samples are simply treated as positive and negative, respectively. To address both issues, we propose a novel robust method, dubbed decoupled contrastive multi-view clustering with high-order random walks (DIVIDE). In brief, DIVIDE leverages random walks to progressively identify data pairs in a global rather than local manner. As a result, DIVIDE can identify in-neighborhood negatives and out-of-neighborhood positives. Moreover, DIVIDE adopts a novel MvC architecture that performs inter- and intra-view contrastive learning in different embedding spaces, thus boosting clustering performance and gaining robustness against missing views. To verify the efficacy of DIVIDE, we carry out extensive experiments on four benchmark datasets, comparing it with nine state-of-the-art MvC methods in both complete and incomplete MvC settings.
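
To make the pairing mechanism concrete, here is a minimal sketch of scoring sample pairs by high-order random walks. It is a hedged reconstruction of the idea, not the paper's implementation; the function name and the `orders`/`decay` parameters are illustrative assumptions.

```python
import numpy as np

def high_order_walk_affinity(features, orders=3, decay=0.5):
    """Score sample pairs by multi-step random-walk reachability."""
    # One-step affinity: cosine similarity, negatives clipped to zero.
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = np.clip(z @ z.T, 0.0, None)
    np.fill_diagonal(sim, 0.0)

    # Row-normalize into a random-walk transition matrix.
    P = sim / (sim.sum(axis=1, keepdims=True) + 1e-12)

    # Accumulate walk probabilities over increasing orders, so that
    # out-of-neighborhood samples reachable via short walks still get
    # positive weight: a global rather than local pairing criterion.
    walk, acc = P.copy(), np.zeros_like(P)
    for t in range(orders):
        acc += (decay ** t) * walk
        walk = walk @ P
    return acc  # higher entries suggest intra-cluster (positive) pairs

# Usage: threshold high_order_walk_affinity(embeddings) to pick positives
# and negatives for a (separately defined) contrastive loss.
```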

Noisy-Correspondence Learning for Text-to-Image Person Re-identification

Aug 19, 2023
Yang Qin, Yingke Chen, Dezhong Peng, Xi Peng, Joey Tianyi Zhou, Peng Hu

Text-to-image person re-identification (TIReID) is a compelling topic in the cross-modal community, which aims to retrieve the target person based on a textual query. Although numerous TIReID methods have been proposed and have achieved promising performance, they implicitly assume that the training image-text pairs are correctly aligned, which is not always the case in real-world scenarios. In practice, image-text pairs are inevitably under-correlated or even falsely correlated, a.k.a. noisy correspondence (NC), due to low image quality and annotation errors. To address this problem, we propose a novel Robust Dual Embedding method (RDE) that can learn robust visual-semantic associations even with NC. Specifically, RDE consists of two main components: 1) A Confident Consensus Division (CCD) module that leverages the dual-grained decisions of dual embedding modules to obtain a consensus set of clean training data, which enables the model to learn correct and reliable visual-semantic associations. 2) A Triplet-Alignment Loss (TAL) that relaxes the conventional triplet-ranking loss with hardest negatives, which tends to rapidly overfit NC, to a log-exponential upper bound over all negatives, thus preventing the model from overemphasizing false image-text pairs. We conduct extensive experiments on three public benchmarks, namely CUHK-PEDES, ICFG-PEDES, and RSTPReID, to evaluate the performance and robustness of our RDE. Our method achieves state-of-the-art results both with and without synthetic noisy correspondences on all three datasets.
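
The TAL component is specific enough to sketch: replace the hardest-negative term of a triplet-ranking loss with a log-exponential (logsumexp) upper bound over all negatives. The code below is one plausible reading under assumed hyperparameters (`margin`, `tau`), not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def triplet_alignment_loss(sim, margin=0.2, tau=0.02):
    """sim: (B, B) image-text similarities; the diagonal holds positives."""
    B = sim.size(0)
    pos = sim.diag()
    # Mask out positives, then take a soft maximum over all negatives:
    # tau * logsumexp(s / tau) >= max(s), so this upper-bounds the
    # conventional hardest-negative triplet-ranking loss.
    neg = sim.masked_fill(torch.eye(B, dtype=torch.bool, device=sim.device),
                          float('-inf'))
    lse_i2t = tau * torch.logsumexp(neg / tau, dim=1)  # image -> text
    lse_t2i = tau * torch.logsumexp(neg / tau, dim=0)  # text -> image
    loss = F.relu(margin - pos + lse_i2t) + F.relu(margin - pos + lse_t2i)
    return loss.mean()
```

Because the logsumexp spreads gradient over all negatives instead of only the hardest one, a few falsely labeled negatives cannot dominate training, which is the stated motivation for the relaxation.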

Single-shot 3D photoacoustic computed tomography with a densely packed array for transcranial functional imaging

Jun 26, 2023
Rui Cao, Yilin Luo, Jinhua Xu, Xiaofei Luo, Ku Geng, Yousuf Aborahama, Manxiu Cui, Samuel Davis, Shuai Na, Xin Tong, Cindy Liu, Karteek Sastry, Konstantin Maslov, Peng Hu, Yide Zhang, Li Lin, Yang Zhang, Lihong V. Wang

Photoacoustic computed tomography (PACT) is emerging as a new technique for functional brain imaging, primarily due to its capability for label-free hemodynamic imaging. Despite its potential, the transcranial application of PACT has encountered hurdles, such as acoustic attenuation and distortion by the skull and limited light penetration through it. To overcome these challenges, we have engineered a PACT system featuring a densely packed hemispherical ultrasonic transducer array with 3072 channels, operating at a central frequency of 1 MHz. This system allows single-shot 3D imaging at a rate equal to the laser repetition rate, such as 20 Hz. We have achieved a single-shot light penetration depth of approximately 9 cm in chicken breast tissue using a 750 nm laser (withstanding 3295-fold light attenuation while still retaining an SNR of 74) and successfully performed transcranial imaging through an ex vivo human skull using a 1064 nm laser. Moreover, we have demonstrated the capacity of our system to perform single-shot 3D PACT imaging in both tissue phantoms and human subjects. These results suggest that our PACT system is poised to unlock the potential for real-time, in vivo transcranial functional imaging in humans.

Semantic Invariant Multi-view Clustering with Fully Incomplete Information

May 22, 2023
Pengxin Zeng, Mouxing Yang, Yiding Lu, Changqing Zhang, Peng Hu, Xi Peng

Robust multi-view learning with incomplete information has received significant attention due to issues such as incomplete correspondences and incomplete instances, which commonly affect real-world multi-view applications. Existing approaches rely heavily on paired samples to realign or impute defective ones, but such preconditions cannot always be satisfied in practice due to the complexity of data collection and transmission. To address this problem, we present a novel framework called SeMantic Invariance LEarning (SMILE) for multi-view clustering with incomplete information that does not require any paired samples. Specifically, we discover the existence of an invariant semantic distribution across different views, which enables SMILE to alleviate the cross-view discrepancy and learn consensus semantics without requiring any paired samples. The resulting consensus semantics is unaffected by cross-view distribution shifts, making it useful for realigning/imputing defective instances and forming clusters. We demonstrate the effectiveness of SMILE through extensive comparison experiments with 13 state-of-the-art baselines on five benchmarks. Our approach improves the clustering accuracy on NoisyMNIST from 19.3%/23.2% to 82.7%/69.0% when the correspondences/instances are fully incomplete. We will release the code after acceptance.

Quantification of cervical elasticity during pregnancy based on transvaginal ultrasound imaging and stress measurement

Mar 10, 2023
Peng Hu, Peinan Zhao, Yuan Qu, Konstantin Maslov, Jessica Chubiz, Methodius G. Tuuli, Molly J. Stout, Lihong V. Wang

Objective: Strain elastography and shear wave elastography are two commonly used methods to quantify cervical elasticity; however, both have limitations. Strain elastography is effective in showing the tissue elasticity distribution in a single image, but the absence of stress information makes it difficult to compare results acquired from different imaging sessions. Shear wave elastography is effective in measuring shear wave speed (an intrinsic tissue property correlated with elasticity) in relatively homogeneous tissue, such as the liver. However, for inhomogeneous tissue in the cervix, the shear wave speed measurement is less robust. To overcome these limitations, we develop a quantitative cervical elastography system by adding a stress sensor to an ultrasound imaging system. Methods: In an imaging session for quantitative cervical elastography, we use the transvaginal ultrasound imaging system to record B-mode images of the cervix showing its deformation and simultaneously use the stress sensor to record the probe-surface stress. We develop a correlation-based automatic feature tracking algorithm to quantify the deformation, from which the strain is quantified. After each imaging session, we calibrate the stress sensor and transform its measurements to true stress. Applying a linear regression to the stress and strain, we obtain an approximation of the cervical Young's modulus. Results: We validate the accuracy and robustness of this elastography system through phantom experiments. Applying the system to pregnant participants, we observe significant softening of the cervix during pregnancy (p-value < 0.001), with the cervical Young's modulus decreasing by 3.95% per week. We estimate that the geometric mean values of the cervical Young's modulus during the first (11 to 13 weeks), second, and third trimesters are 13.07 kPa, 7.59 kPa, and 4.40 kPa, respectively.
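
The final estimation step (regressing stress against strain to approximate Young's modulus) reduces to a line fit. Below is a minimal sketch with synthetic data standing in for the sensor and tracker outputs, which are not reproduced here.

```python
import numpy as np

def estimate_youngs_modulus(stress_pa, strain):
    """Slope of the stress-strain line approximates Young's modulus (Pa)."""
    slope, _intercept = np.polyfit(strain, stress_pa, deg=1)
    return slope

# Synthetic example: a 10 kPa material plus measurement noise.
rng = np.random.default_rng(0)
strain = np.linspace(0.0, 0.05, 50)                     # from the tracker
stress = 10e3 * strain + rng.normal(0, 5, strain.size)  # from the sensor
print(f"E ~ {estimate_youngs_modulus(stress, strain):.0f} Pa")
```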

* 26 pages, 8 figures, 1 table 

Correspondence-Free Domain Alignment for Unsupervised Cross-Domain Image Retrieval

Feb 13, 2023
Xu Wang, Dezhong Peng, Ming Yan, Peng Hu

Cross-domain image retrieval aims at retrieving images across different domains to excavate cross-domain classificatory or correspondence relationships. This paper studies a less-touched problem of cross-domain image retrieval, i.e., unsupervised cross-domain image retrieval, under the following practical assumptions: (i) no correspondence relationship, and (ii) no category annotations. It is challenging to align and bridge distinct domains without cross-domain correspondence. To tackle this challenge, we present a novel Correspondence-free Domain Alignment (CoDA) method that effectively eliminates the cross-domain gap through In-domain Self-matching Supervision (ISS) and Cross-domain Classifier Alignment (CCA). To be specific, ISS encapsulates discriminative information into the latent common space via a novel self-matching supervision mechanism. To alleviate the cross-domain discrepancy, CCA aligns distinct domain-specific classifiers. Thanks to ISS and CCA, our method can encode the discrimination into a domain-invariant embedding space for unsupervised cross-domain image retrieval. To verify the effectiveness of the proposed method, extensive experiments are conducted on four benchmark datasets, comparing with six state-of-the-art methods.
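
The CCA idea (aligning two domain-specific classifiers) admits many instantiations. One plausible sketch, under the assumption that alignment is enforced by matching the classifiers' predicted distributions on the same embeddings, is:

```python
import torch
import torch.nn.functional as F

def classifier_alignment_loss(logits_a, logits_b):
    """Symmetric KL between two domain-specific classifiers' predictions;
    an assumed stand-in for CoDA's CCA term, not its confirmed form."""
    log_pa = F.log_softmax(logits_a, dim=1)
    log_pb = F.log_softmax(logits_b, dim=1)
    kl_ab = F.kl_div(log_pa, log_pb.exp(), reduction='batchmean')
    kl_ba = F.kl_div(log_pb, log_pa.exp(), reduction='batchmean')
    return 0.5 * (kl_ab + kl_ba)
```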

* AAAI 2023 

Satellite Anomaly Detection Using Variance Based Genetic Ensemble of Neural Networks

Feb 10, 2023
Mohammad Amin Maleki Sadr, Yeying Zhu, Peng Hu

In this paper, we use a variance-based genetic ensemble (VGE) of neural networks (NNs) to detect anomalies in satellite historical data. We build an efficient ensemble of the predictions from multiple recurrent neural networks (RNNs) by leveraging each model's uncertainty level (variance). For prediction, each RNN is guided by a genetic algorithm (GA), which constructs the optimal structure for each RNN model. However, estimating a model's uncertainty level is challenging in many cases. Although Bayesian NN (BNN)-based methods are popular for providing confidence bounds on models, they cannot be employed for complex NN structures, as they are computationally intractable. This paper instead uses Monte Carlo (MC) dropout as an approximation of BNNs. These uncertainty levels and the predictive models suggested by the GA are then used to generate a new ensemble model, which is used for time series (TS) forecasting and anomaly detection (AD). Simulation results show that the forecasting and AD capabilities of the ensemble model outperform existing approaches.
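
MC dropout is a standard approximation, so it can be sketched directly: keep dropout stochastic at inference and read uncertainty off the spread of repeated forward passes. How VGE weights ensemble members by this variance is only paraphrased in the trailing comment, as an assumption.

```python
import torch

def mc_dropout_predict(model, x, n_samples=50):
    """Mean and variance over stochastic forward passes (MC dropout)."""
    model.train()  # keeps dropout active (note: also affects BatchNorm)
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)  # forecast, uncertainty

# One plausible reading of the variance-based ensemble: weight each
# GA-selected RNN inversely to its MC-dropout variance, e.g.
# w_k = (1 / var_k) / sum_j (1 / var_j)
```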

Incomplete Multi-view Clustering via Prototype-based Imputation

Jan 30, 2023
Haobin Li, Yunfan Li, Mouxing Yang, Peng Hu, Dezhong Peng, Xi Peng

In this paper, we study how to achieve two characteristics highly expected in incomplete multi-view clustering (IMvC). Namely, i) instance commonality: within-cluster instances should share a common pattern; and ii) view versatility: cross-view samples should own view-specific patterns. To this end, we design a novel dual-stream model that employs a dual attention layer and a dual contrastive learning loss to learn view-specific prototypes and to model the sample-prototype relationship. When a view is missing, our model recovers the data using the prototypes in the missing view and the sample-prototype relationship inherited from the observed view. Thanks to our dual-stream model, both cluster- and view-specific information can be captured, and thus the instance commonality and view versatility are preserved to facilitate IMvC. Extensive experiments demonstrate the superiority of our method on six challenging benchmarks compared with 11 approaches. The code will be released.
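
The recovery rule described above (impute a missing view from its prototypes plus the sample-prototype relationship of the observed view) can be sketched as a single attention step. The temperature `tau` is an assumption, and the paper's dual attention layer is more elaborate than this.

```python
import torch
import torch.nn.functional as F

def impute_missing_view(z_obs, protos_obs, protos_miss, tau=0.1):
    """z_obs: (B, d) observed-view embeddings; protos_*: (K, d) prototypes."""
    sim = z_obs @ protos_obs.T          # (B, K) sample-prototype affinity
    attn = F.softmax(sim / tau, dim=1)  # relationship in the observed view
    return attn @ protos_miss           # (B, d) recovered missing-view data
```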

* 7 pages, 6 figures 

Graph Matching with Bi-level Noisy Correspondence

Dec 22, 2022
Yijie Lin, Mouxing Yang, Jun Yu, Peng Hu, Changqing Zhang, Xi Peng

In this paper, we study a novel and widely existing problem in graph matching (GM), namely, Bi-level Noisy Correspondence (BNC), which refers to node-level noisy correspondence (NNC) and edge-level noisy correspondence (ENC). In brief, on the one hand, due to poor recognizability and viewpoint differences between images, some keypoints are inevitably annotated inaccurately, with offset and confusion, leading to mismatches between two associated nodes, i.e., NNC. On the other hand, noisy node-to-node correspondence further contaminates the edge-to-edge correspondence, thus leading to ENC. To address the BNC challenge, we propose a novel method termed Contrastive Matching with Momentum Distillation. Specifically, the proposed method employs a robust quadratic contrastive loss that enjoys the following merits: i) better exploring the node-to-node and edge-to-edge correlations through a GM-customized quadratic contrastive learning paradigm; ii) adaptively penalizing noisy assignments based on the confidence estimated by the momentum teacher. Extensive experiments on three real-world datasets show the robustness of our method compared with 12 competitive baselines.
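
The momentum-distillation side can be sketched from its standard form: the teacher is an exponential moving average (EMA) of the student, and its matching scores serve as confidence for down-weighting noisy assignments. The paper's exact usage may differ from this sketch.

```python
import copy
import torch

@torch.no_grad()
def update_momentum_teacher(student, teacher, m=0.999):
    """EMA update: teacher <- m * teacher + (1 - m) * student."""
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(m).add_(ps, alpha=1 - m)

# teacher = copy.deepcopy(student)           # initialize once
# update_momentum_teacher(student, teacher)  # after each student step
# conf = teacher_scores.softmax(dim=-1)      # confidence over assignments,
# used to adaptively penalize likely-noisy node/edge correspondences
```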
