Shiliang Pu

MProto: Multi-Prototype Network with Denoised Optimal Transport for Distantly Supervised Named Entity Recognition

Oct 12, 2023
Shuhui Wu, Yongliang Shen, Zeqi Tan, Wenqi Ren, Jietian Guo, Shiliang Pu, Weiming Lu

Distantly supervised named entity recognition (DS-NER) aims to locate entity mentions and classify their types using only knowledge bases or gazetteers and an unlabeled corpus. However, distant annotations are noisy and degrade the performance of NER models. In this paper, we propose a noise-robust prototype network named MProto for the DS-NER task. Unlike previous prototype-based NER methods, MProto represents each entity type with multiple prototypes to characterize the intra-class variance among entity representations. To optimize the classifier, each token must be assigned an appropriate ground-truth prototype, and we formulate this token-prototype assignment as an optimal transport (OT) problem. Furthermore, to mitigate the noise from incomplete labeling, we propose a novel denoised optimal transport (DOT) algorithm. Specifically, we utilize the assignment result between Other-class tokens and all prototypes to distinguish unlabeled entity tokens from true negatives. Experiments on several DS-NER benchmarks demonstrate that MProto achieves state-of-the-art performance. The source code is available on GitHub.
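The abstract does not spell out the OT formulation, so the following is only a minimal sketch of token-to-prototype assignment via entropic-regularized OT, assuming cosine-distance costs and uniform marginals; `sinkhorn`, `tokens`, and `protos` are illustrative names, not the paper's API.

```python
import torch
import torch.nn.functional as F

def sinkhorn(cost, n_iters=50, eps=0.1):
    # Entropic-regularized OT with uniform marginals (standard Sinkhorn iterations).
    n, m = cost.shape
    K = torch.exp(-cost / eps)          # Gibbs kernel
    r = torch.full((n,), 1.0 / n)       # token marginal
    c = torch.full((m,), 1.0 / m)       # prototype marginal
    v = torch.ones(m)
    for _ in range(n_iters):
        u = r / (K @ v)
        v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]  # transport plan

# Toy example: assign 6 tokens labeled with one entity type to its 2 prototypes.
tokens = torch.randn(6, 32)             # token representations
protos = torch.randn(2, 32)             # multiple prototypes of the type
cost = 1.0 - F.cosine_similarity(tokens[:, None], protos[None, :], dim=-1)
plan = sinkhorn(cost)
assignment = plan.argmax(dim=1)         # ground-truth prototype per token
```

The denoised variant (DOT) would additionally inspect how Other-class tokens transport to entity prototypes to flag likely unlabeled entities.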

* Accepted to EMNLP-2023, camera-ready version 

Accelerating Dynamic Network Embedding with Billions of Parameter Updates to Milliseconds

Jun 15, 2023
Haoran Deng, Yang Yang, Jiahe Li, Haoyang Cai, Shiliang Pu, Weihao Jiang

Network embedding, a graph representation learning method that captures network topology by mapping nodes to low-dimensional vectors, struggles to accommodate the ever-changing dynamic graphs encountered in practice. Existing research is mainly based on node-by-node embedding modifications, which face a dilemma between computational efficiency and accuracy. Observing that the embedding dimension is usually much smaller than the number of nodes, we break this dilemma with a novel dynamic network embedding paradigm that rotates and scales the axes of the embedding space instead of performing node-by-node updates. Specifically, we propose the Dynamic Adjacency Matrix Factorization (DAMF) algorithm, which achieves efficient and accurate dynamic network embedding by rotating and scaling the coordinate system in which the network embedding resides, changing no more node embeddings than the number of edge modifications. Moreover, dynamic Personalized PageRank is applied to the obtained network embeddings to enhance node embeddings and capture higher-order neighbor information dynamically. Experiments on node classification, link prediction, and graph reconstruction over dynamic graphs of different sizes suggest that DAMF advances dynamic network embedding. Further, we expand dynamic network embedding experiments to billion-edge graphs for the first time, where DAMF updates billion-level parameters in less than 10 ms.
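DAMF's actual update is derived from adjacency-matrix factorization; the toy sketch below only illustrates why rotating and scaling the axes makes an update O(d^2) instead of O(nd). `LazyEmbedding` and the random rotation are illustrative stand-ins, not the paper's algorithm.

```python
import numpy as np

class LazyEmbedding:
    """Store embeddings as base @ transform; axis updates touch only a d x d matrix."""
    def __init__(self, base):
        self.base = base                         # (n_nodes, d) fixed coordinates
        self.transform = np.eye(base.shape[1], dtype=base.dtype)

    def rotate_scale(self, update):
        # Absorb a graph change into the d x d transform instead of
        # rewriting all n_nodes embedding rows.
        self.transform = self.transform @ update

    def embedding(self, node):
        return self.base[node] @ self.transform

n, d = 10_000, 128
emb = LazyEmbedding(np.random.randn(n, d).astype(np.float32))
q, _ = np.linalg.qr(np.random.randn(d, d).astype(np.float32))
emb.rotate_scale(q)                              # O(d^2) work, independent of n
vec = emb.embedding(42)                          # materialized on demand
```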

Single Domain Dynamic Generalization for Iris Presentation Attack Detection

May 22, 2023
Yachun Li, Jingjing Wang, Yuhui Chen, Di Xie, Shiliang Pu

Iris presentation attack detection (PAD) has achieved great success under intra-domain settings but degrades easily on unseen domains. Conventional domain generalization methods mitigate the gap by learning domain-invariant features, but they ignore the discriminative information in domain-specific features. Moreover, we usually face a more realistic scenario with only a single domain available for training. To tackle these issues, we propose a Single Domain Dynamic Generalization (SDDG) framework, which simultaneously exploits domain-invariant and domain-specific features on a per-sample basis and learns to generalize to various unseen domains with numerous natural images. Specifically, a dynamic block adaptively adjusts the network through a dynamic adaptor, and an information maximization loss is further combined to increase diversity. The whole network is integrated into the meta-learning paradigm: we generate amplitude-perturbed images and cover diverse domains with natural images, so the network learns to generalize to the perturbed domains in the meta-test phase. Extensive experiments show the proposed method is effective and outperforms the state-of-the-art on the LivDet-Iris 2017 dataset.
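The abstract mentions amplitude-perturbed images; a common way to realize this is Fourier amplitude mixing with a natural image. The sketch below assumes that scheme (the paper's exact perturbation may differ), and the image arrays are stand-ins.

```python
import numpy as np

def amplitude_perturb(img, ref, alpha=0.3):
    # Mix img's Fourier amplitude with ref's while keeping img's phase,
    # producing a domain-shifted but content-preserving image.
    f_img = np.fft.fft2(img, axes=(0, 1))
    f_ref = np.fft.fft2(ref, axes=(0, 1))
    amp = (1 - alpha) * np.abs(f_img) + alpha * np.abs(f_ref)
    mixed = amp * np.exp(1j * np.angle(f_img))
    return np.real(np.fft.ifft2(mixed, axes=(0, 1)))

iris = np.random.rand(128, 128)      # stand-in for an iris image
natural = np.random.rand(128, 128)   # stand-in for a natural image
perturbed = amplitude_perturb(iris, natural)
```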

* ICASSP 2023, camera-ready version 

Taxonomy Completion with Probabilistic Scorer via Box Embedding

May 19, 2023
Wei Xue, Yongliang Shen, Wenqi Ren, Jietian Guo, Shiliang Pu, Weiming Lu

Taxonomy completion, the task of automatically enriching an existing taxonomy with new concepts, has gained significant interest in recent years. Previous works have introduced complex modules, external information, and pseudo-leaves to enrich the representation and unify the matching process of attachment and insertion. While they achieve good performance, these additions may introduce noise and unfairness during training and scoring. In this paper, we present TaxBox, a novel framework for taxonomy completion that maps taxonomy concepts to box embeddings and employs two probabilistic scorers for concept attachment and insertion, avoiding the need for pseudo-leaves. Specifically, TaxBox consists of three components: (1) a graph aggregation module that leverages the structural information of the taxonomy, together with two lightweight decoders that map features to box embeddings and capture complex relationships between concepts; (2) two probabilistic scorers that correspond to the attachment and insertion operations and avoid pseudo-leaves; and (3) three learning objectives that help the model map concepts onto the box-embedding space at a finer granularity. Experimental results on four real-world datasets suggest that TaxBox outperforms baseline methods by a considerable margin and surpasses previous state-of-the-art methods to a certain extent.
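As a rough illustration of a box-embedding probabilistic scorer (not the paper's exact formulation), containment of one concept box in another can be scored by the ratio of intersection volume to the child's volume; `containment_prob` and the corner-pair encoding are assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

def box_volume(lo, hi, eps=1e-8):
    # Soft volume of an axis-aligned box; softplus keeps empty boxes differentiable.
    return F.softplus(hi - lo).clamp_min(eps).prod(dim=-1)

def containment_prob(child, parent):
    # P(parent contains child) = vol(child ∩ parent) / vol(child).
    c_lo, c_hi = child
    p_lo, p_hi = parent
    i_lo = torch.maximum(c_lo, p_lo)
    i_hi = torch.minimum(c_hi, p_hi)
    return box_volume(i_lo, i_hi) / box_volume(c_lo, c_hi)

animal = (torch.tensor([0.0, 0.0]), torch.tensor([4.0, 4.0]))
dog = (torch.tensor([1.0, 1.0]), torch.tensor([2.0, 2.0]))
print(containment_prob(dog, animal))   # near 1: 'dog' fits inside 'animal'
```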

Multi-view Adversarial Discriminator: Mine the Non-causal Factors for Object Detection in Unseen Domains

Apr 06, 2023
Mingjun Xu, Lingyun Qin, Weijie Chen, Shiliang Pu, Lei Zhang

Domain shift degrades the performance of object detection models in practical applications. To alleviate its influence, much previous work tries to decouple and learn the domain-invariant (common) features from source domains via domain adversarial learning (DAL). However, inspired by causal mechanisms, we find that previous methods ignore the implicit, seemingly insignificant non-causal factors hidden in the common features, mainly due to the single-view nature of DAL. In this work, we present an idea to remove non-causal factors from the common features by multi-view adversarial training on source domains, because we observe that non-causal factors insignificant in one view may still be significant in other latent spaces (views) owing to the multi-mode structure of data. In summary, we propose a Multi-view Adversarial Discriminator (MAD) based domain generalization model, consisting of a Spurious Correlations Generator (SCG) that increases the diversity of the source domain by random augmentation and a Multi-View Domain Classifier (MVDC) that maps features to multiple latent spaces, such that the non-causal factors are removed and the domain-invariant features are purified. Extensive experiments on six benchmarks show that MAD obtains state-of-the-art performance.
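A minimal sketch of the multi-view idea on top of standard gradient-reversal DAL: the same features are projected into several latent views, each with its own domain classifier. The paper's MVDC uses learned encoders per view; the linear views here are illustrative simplifications.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; flips the gradient sign in the backward
    # pass, the standard domain-adversarial-learning trick.
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad):
        return -grad

class MultiViewDomainClassifier(nn.Module):
    def __init__(self, dim, n_views=3, n_domains=2):
        super().__init__()
        self.views = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_views)])
        self.heads = nn.ModuleList([nn.Linear(dim, n_domains) for _ in range(n_views)])

    def forward(self, feat):
        feat = GradReverse.apply(feat)
        # One domain prediction per latent view; a feature only survives the
        # adversarial game if it is domain-invariant in every view.
        return [head(view(feat)) for view, head in zip(self.views, self.heads)]

feats = torch.randn(8, 256)                       # backbone features
logits_per_view = MultiViewDomainClassifier(256)(feats)
```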

* CVPR 2023 (Highlight, top 2.5%). PyTorch and MindSpore code at https://github.com/K2OKOH/MAD 

Rethinking the Approximation Error in 3D Surface Fitting for Point Cloud Normal Estimation

Mar 30, 2023
Hang Du, Xuejun Yan, Jingjing Wang, Di Xie, Shiliang Pu

Most existing approaches for point cloud normal estimation locally fit a geometric surface and calculate the normal from the fitted surface. Recently, learning-based methods have adopted a routine of predicting point-wise weights to solve a weighted least-squares surface fitting problem. Despite remarkable progress, these methods overlook the approximation error of the fitting problem, resulting in a less accurate fitted surface. In this paper, we first carry out an in-depth analysis of the approximation error in the surface fitting problem. Then, to bridge the gap between estimated and precise surface normals, we present two basic design principles: 1) apply the $Z$-direction Transform to rotate local patches for a better surface fit with lower approximation error; 2) model the error of the normal estimation as a learnable term. We implement these two principles with deep neural networks and integrate them into state-of-the-art (SOTA) normal estimation methods in a plug-and-play manner. Extensive experiments verify that our approach benefits point cloud normal estimation and pushes the frontier of state-of-the-art performance on both synthetic and real-world datasets.
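For context, the weighted least-squares step that these learning-based methods build on can be sketched as weighted PCA over a local patch; the paper's contributions (the $Z$-direction Transform and the learnable error term) sit on top of this classical baseline and are not reproduced here.

```python
import numpy as np

def weighted_normal(patch, weights):
    # Plane fit by weighted PCA: the normal is the eigenvector of the
    # weighted covariance with the smallest eigenvalue.
    w = weights / weights.sum()
    centroid = (w[:, None] * patch).sum(axis=0)
    centered = patch - centroid
    cov = (w[:, None] * centered).T @ centered
    _, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    return eigvecs[:, 0]

patch = np.random.rand(50, 3)                 # stand-in local neighborhood
weights = np.ones(50)                         # a network would predict these
normal = weighted_normal(patch, weights)
```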

* The first two authors contributed equally to this work. The source code is available at https://github.com/hikvision-research/3DVision. Accepted to CVPR 2023 

1st Place Solution for ECCV 2022 OOD-CV Challenge Object Detection Track

Jan 12, 2023
Wei Zhao, Binbin Chen, Weijie Chen, Shicai Yang, Di Xie, Shiliang Pu, Yueting Zhuang

The OOD-CV challenge is an out-of-distribution generalization task. To solve this problem in the object detection track, we propose a simple yet effective Generalize-then-Adapt (G&A) framework, composed of a two-stage domain generalization part and a one-stage domain adaptation part. The domain generalization part is implemented by a Supervised Model Pretraining stage that uses source data for model warm-up, followed by a Weakly Semi-Supervised Model Pretraining stage that boosts performance using both source data with box-level labels and auxiliary data (ImageNet-1K) with image-level labels. The domain adaptation part is implemented as a Source-Free Domain Adaptation paradigm, which uses only the pre-trained model and the unlabeled target data for further optimization in a self-supervised training manner. The proposed G&A framework helped us achieve first place on the object detection leaderboard of the OOD-CV challenge. Code will be released at https://github.com/hikvision-research/OOD-CV.
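The source-free adaptation stage is described only at a high level; below is a generic self-training sketch of that paradigm (pseudo-label with an EMA teacher, train the student on confident predictions), written classification-style for brevity rather than as the detector-specific pipeline actually used.

```python
import torch
import torch.nn.functional as F

def sfda_step(student, teacher, images, optimizer, conf_thresh=0.8, momentum=0.999):
    # Pseudo-label unlabeled target images with the EMA teacher.
    with torch.no_grad():
        probs = teacher(images).softmax(dim=-1)
        conf, pseudo = probs.max(dim=-1)
    keep = conf > conf_thresh
    if keep.any():
        # Train the student only on confident pseudo-labels.
        loss = F.cross_entropy(student(images)[keep], pseudo[keep])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Slowly track the student with the teacher (EMA update).
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.data.mul_(momentum).add_(p_s.data, alpha=1 - momentum)
```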

* Tech Report 

1st Place Solution for ECCV 2022 OOD-CV Challenge Image Classification Track

Jan 12, 2023
Yilu Guo, Xingyue Shi, Weijie Chen, Shicai Yang, Di Xie, Shiliang Pu, Yueting Zhuang

The OOD-CV challenge is an out-of-distribution generalization task. In this challenge, our core solution can be summarized as: noisy label learning is a strong test-time domain adaptation optimizer. Briefly, our pipeline is divided into two stages, a pre-training stage for domain generalization and a test-time training stage for domain adaptation. We exploit only labeled source data in the pre-training stage and only unlabeled target data in the test-time training stage. In the pre-training stage, we propose a simple yet effective Mask-Level Copy-Paste data augmentation strategy to enhance out-of-distribution generalization and thereby resist the shape, pose, context, texture, occlusion, and weather domain shifts in this challenge. In the test-time training stage, we use the pre-trained model to assign noisy labels to the unlabeled target data and propose a Label-Periodically-Updated DivideMix method for noisy label learning. After integrating Test-Time Augmentation and Model Ensemble strategies, our solution ranked first on the Image Classification Leaderboard of the OOD-CV Challenge. Code will be released at https://github.com/hikvision-research/OOD-CV.
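A minimal sketch of mask-level copy-paste (the challenge solution presumably adds random placement, scaling, and blending, which are omitted here); all arrays are stand-ins.

```python
import numpy as np

def mask_copy_paste(dst_img, src_img, src_mask):
    # Paste the object selected by src_mask from src_img onto dst_img,
    # creating a new context/occlusion combination for the object.
    out = dst_img.copy()
    out[src_mask] = src_img[src_mask]
    return out

dst = np.random.rand(64, 64, 3)        # stand-in destination image
src = np.random.rand(64, 64, 3)        # stand-in source image
mask = np.zeros((64, 64), dtype=bool)
mask[16:40, 16:40] = True              # stand-in object mask
augmented = mask_copy_paste(dst, src, mask)
```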

* Tech Report 

NeRF-Gaze: A Head-Eye Redirection Parametric Model for Gaze Estimation

Dec 30, 2022
Pengwei Yin, Jiawu Dai, Jingjing Wang, Di Xie, Shiliang Pu

Gaze estimation is fundamental to many visual tasks. Yet the high cost of acquiring gaze datasets with 3D annotations hinders the optimization and application of gaze estimation models. In this work, we propose a novel Head-Eye redirection parametric model based on Neural Radiance Fields, which allows dense gaze data generation with view consistency and accurate gaze directions. Moreover, our head-eye redirection parametric model can decouple the face and eyes for separate neural rendering, so the attributes of face, identity, illumination, and eye gaze direction can be controlled separately. Diverse 3D-aware gaze datasets can thus be obtained by manipulating the latent codes belonging to different face attributes in an unsupervised manner. Extensive experiments on several benchmarks demonstrate the effectiveness of our method for domain generalization and domain adaptation in gaze estimation tasks.

* 10 pages, 8 figures, submitted to CVPR 2023 

Rate-Distortion Optimized Post-Training Quantization for Learned Image Compression

Nov 05, 2022
Junqi Shi, Ming Lu, Fangdong Chen, Shiliang Pu, Zhan Ma

Quantizing a floating-point neural network to its fixed-point representation is crucial for Learned Image Compression (LIC) because it ensures decoding consistency for interoperability and reduces the space-time complexity of the implementation. Existing solutions often have to retrain the network for model quantization, which is time-consuming and impractical. This work suggests using Post-Training Quantization (PTQ) to directly process pretrained, off-the-shelf LIC models. We theoretically prove that minimizing the mean squared error (MSE) in PTQ is sub-optimal for the compression task, and thus develop a novel Rate-Distortion (R-D) Optimized PTQ (RDO-PTQ) to best retain the compression performance. RDO-PTQ only needs to compress a few images (e.g., 10) to optimize the transformation of the weights, biases, and activations of the underlying LIC model from its native 32-bit floating-point (FP32) format to 8-bit fixed-point (INT8) precision for subsequent fixed-point inference. Experiments reveal the outstanding efficiency of the proposed method on different LICs, showing coding performance closest to that of their floating-point counterparts. Moreover, our method is a lightweight, plug-and-play approach that requires no model retraining, which is attractive to practitioners.
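To make the R-D-optimized idea concrete: instead of choosing quantization parameters by weight-space MSE, one can pick them by the end-to-end rate-distortion loss measured on a few calibration images. The sketch below assumes a simple per-tensor step-size search; `model_rate_and_distortion` is a hypothetical helper standing in for a full LIC forward pass, and the loop is illustrative rather than the paper's exact procedure.

```python
import torch

def quantize(w, step):
    # Uniform 8-bit fixed-point quantization with step size `step`.
    return torch.clamp(torch.round(w / step), -128, 127) * step

def calibrate_step(w, candidate_steps, model_rate_and_distortion, lam=0.01):
    # Choose the step size minimizing R + lambda * D on calibration images,
    # rather than the weight-space MSE used by generic PTQ.
    best_step, best_loss = None, float("inf")
    for step in candidate_steps:
        rate_bits, distortion = model_rate_and_distortion(quantize(w, step))
        loss = rate_bits + lam * distortion
        if loss < best_loss:
            best_step, best_loss = step, loss
    return best_step
```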
