
Zehao Xiao


Learning Variational Neighbor Labels for Test-Time Domain Generalization

Jul 08, 2023
Sameer Ambekar, Zehao Xiao, Jiayi Shen, Xiantong Zhen, Cees G. M. Snoek


This paper strives for domain generalization, where models are trained exclusively on source domains before being deployed on unseen target domains. We follow the strict separation of source training and target testing, but exploit the value of the unlabeled target data itself during inference. We make three contributions. First, we propose probabilistic pseudo-labeling of target samples to generalize the source-trained model to the target domain at test time. We formulate test-time generalization as a variational inference problem, modeling pseudo labels as distributions to account for the uncertainty during generalization and to alleviate the misleading signal of inaccurate pseudo labels. Second, we learn variational neighbor labels that incorporate information from neighboring target samples to generate more robust pseudo labels. Third, to learn the ability to incorporate more representative target information and generate more precise and robust variational neighbor labels, we introduce a meta-generalization stage during training that simulates the generalization procedure. Experiments on six widely used datasets demonstrate the benefits, abilities, and effectiveness of our proposal.
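As a rough illustration of the core idea, the sketch below refines per-sample pseudo-label distributions with feature-space neighbors and exposes their entropy as an uncertainty signal. It is a minimal simplification in PyTorch, assuming a hypothetical interface of batch features and classifier logits, not the paper's variational formulation or its meta-generalization stage.

```python
import torch
import torch.nn.functional as F

def variational_neighbor_labels(features, logits, k=5, num_samples=10):
    """Refine pseudo-label beliefs with k nearest target neighbors.

    features: (N, D) target-batch features; logits: (N, C) classifier outputs.
    Hypothetical interface, simplified from the paper's formulation.
    """
    probs = logits.softmax(dim=-1)              # base pseudo-label beliefs
    feats = F.normalize(features, dim=-1)
    sim = feats @ feats.T                       # cosine similarities
    _, idx = sim.topk(k, dim=-1)                # k neighbors (self included)
    neighbor_probs = probs[idx].mean(dim=1)     # (N, C) aggregated beliefs
    # Treat the refined belief as a categorical distribution: sampling pseudo
    # labels, rather than taking a hard argmax, propagates label uncertainty.
    dist = torch.distributions.Categorical(probs=neighbor_probs)
    samples = dist.sample((num_samples,))       # (num_samples, N) label draws
    entropy = dist.entropy()                    # per-sample uncertainty
    return samples, entropy
```

Low-entropy samples could then weight a test-time cross-entropy update of the source-trained classifier, so unreliable pseudo labels contribute less.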

* Under review 

ProtoDiff: Learning to Learn Prototypical Networks by Task-Guided Diffusion

Jun 26, 2023
Yingjun Du, Zehao Xiao, Shengcai Liao, Cees Snoek

Prototype-based meta-learning has emerged as a powerful technique for addressing few-shot learning challenges. However, estimating a deterministic prototype by simply averaging a limited number of examples remains a fragile process. To overcome this limitation, we introduce ProtoDiff, a novel framework that leverages a task-guided diffusion model during the meta-training phase to gradually generate prototypes, thereby providing efficient class representations. Specifically, a set of prototypes is optimized to achieve per-task prototype overfitting, enabling accurate estimation of the overfitted prototypes for individual tasks. We further introduce a task-guided diffusion process within the prototype space, enabling the meta-learning of a generative process that transitions from a vanilla prototype to an overfitted prototype. During the meta-test stage, ProtoDiff gradually generates task-specific prototypes from random noise, conditioned on the limited samples available for the new task. In addition, to expedite training and enhance ProtoDiff's performance, we propose residual prototype learning, which leverages the sparsity of the residual prototype. Thorough ablation studies demonstrate ProtoDiff's ability to accurately capture the underlying prototype distribution and enhance generalization. New state-of-the-art performance on within-domain, cross-domain, and few-task few-shot classification further substantiates its benefit.
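To make the prototype-generation idea concrete, here is a toy PyTorch sketch of a task-conditioned denoiser that produces a residual added back to the vanilla (support-mean) prototype. The network, step rule, and sizes are all illustrative assumptions, not ProtoDiff's actual architecture or sampler.

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Toy task-conditioned denoiser predicting a *residual* prototype."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim * 2 + 1, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, noisy_residual, vanilla_proto, t):
        # Condition on the vanilla prototype and the (scaled) timestep.
        t_emb = t.expand(noisy_residual.size(0), 1)
        return self.net(torch.cat([noisy_residual, vanilla_proto, t_emb], dim=-1))

@torch.no_grad()
def sample_prototype(denoiser, vanilla_proto, steps=50):
    """Generate a task-specific prototype from noise, guided by the task's
    vanilla prototype; a crude reverse process, not a faithful DDPM sampler."""
    x = torch.randn_like(vanilla_proto)
    for t in reversed(range(steps)):
        t_scaled = torch.tensor([[t / steps]])
        x = x - denoiser(x, vanilla_proto, t_scaled) / steps  # denoising step
    return vanilla_proto + x                                  # residual added back
```

With `vanilla_proto = support_features.mean(0, keepdim=True)`, the generated prototype would replace the plain average in a prototypical-network classifier.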

* Under review 

Energy-Based Test Sample Adaptation for Domain Generalization

Feb 22, 2023
Zehao Xiao, Xiantong Zhen, Shengcai Liao, Cees G. M. Snoek


In this paper, we propose energy-based sample adaptation at test time for domain generalization. Where previous works adapt their models to target domains, we adapt the unseen target samples to source-trained models. To this end, we design a discriminative energy-based model, which is trained on source domains to jointly model the conditional distribution for classification and data distribution for sample adaptation. The model is optimized to simultaneously learn a classifier and an energy function. To adapt target samples to source distributions, we iteratively update the samples by energy minimization with stochastic gradient Langevin dynamics. Moreover, to preserve the categorical information in the sample during adaptation, we introduce a categorical latent variable into the energy-based model. The latent variable is learned from the original sample before adaptation by variational inference and fixed as a condition to guide the sample update. Experiments on six benchmarks for classification of images and microblog threads demonstrate the effectiveness of our proposal.
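The iterative sample update is the most mechanical part of the method, so a compact sketch may help: it runs stochastic gradient Langevin dynamics on an energy function to pull a target sample toward the source distribution. The `energy_fn` interface is a hypothetical stand-in for the source-trained energy model, and the sketch omits the categorical latent variable the paper uses to preserve class information during adaptation.

```python
import torch

def adapt_samples_sgld(energy_fn, x, steps=20, step_size=0.01, noise_std=0.005):
    """Adapt target samples by energy minimization with Langevin dynamics.

    energy_fn(x) -> (N,) per-sample energies; hypothetical interface.
    """
    x = x.clone().requires_grad_(True)
    for _ in range(steps):
        energy = energy_fn(x).sum()
        grad, = torch.autograd.grad(energy, x)
        with torch.no_grad():
            # Langevin step: descend the energy, plus Gaussian exploration noise.
            x -= step_size * grad
            x += noise_std * torch.randn_like(x)
    return x.detach()
```

The adapted samples are then classified by the frozen source-trained model; only the inputs move, never the model parameters.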

* Accepted by ICLR 2023 

Association Graph Learning for Multi-Task Classification with Category Shifts

Oct 10, 2022
Jiayi Shen, Zehao Xiao, Xiantong Zhen, Cees G. M. Snoek, Marcel Worring


In this paper, we focus on multi-task classification, where related classification tasks share the same label space and are learned simultaneously. In particular, we tackle a new setting that is more realistic than those currently addressed in the literature, where categories shift from training to test data. Hence, individual tasks do not contain complete training data for the categories in the test set. To generalize to such test data, it is crucial for individual tasks to leverage knowledge from related tasks. To this end, we propose learning an association graph to transfer knowledge among tasks for missing classes. We construct the association graph with nodes representing tasks, classes, and instances, and encode the relationships among the nodes in the edges to guide their mutual knowledge transfer. By message passing on the association graph, our model enhances the categorical information of each instance, making it more discriminative. To avoid spurious correlations between task and class nodes in the graph, we introduce an assignment entropy maximization that encourages each class node to balance its edge weights. This enables all tasks to fully utilize the categorical information from related tasks. An extensive evaluation on three general benchmarks and a medical dataset for skin lesion classification reveals that our method consistently performs better than representative baselines.
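The assignment-entropy idea lends itself to a short sketch: the regularizer below encourages each class node to spread its edge weights across task nodes rather than collapse onto one task. The `(num_classes, num_tasks)` edge parameterization is an illustrative assumption, not the paper's full association graph with instance nodes.

```python
import torch

def assignment_entropy_regularizer(edge_logits):
    """Encourage each class node to balance its edge weights across tasks.

    edge_logits: (num_classes, num_tasks), hypothetical class-to-task edges.
    Returns a term to *add* to the loss, so minimizing it maximizes entropy.
    """
    weights = edge_logits.softmax(dim=-1)       # normalized edges per class node
    entropy = -(weights * weights.clamp_min(1e-8).log()).sum(dim=-1)
    return -entropy.mean()
```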


Learning to Generalize across Domains on Single Test Samples

Feb 16, 2022
Zehao Xiao, Xiantong Zhen, Ling Shao, Cees G. M. Snoek


We strive to learn a model from a set of source domains that generalizes well to unseen target domains. The main challenge in such a domain generalization scenario is the unavailability of any target domain data during training, so the learned model is never explicitly adapted to the unseen target domains. We propose learning to generalize across domains on single test samples. We leverage a meta-learning paradigm to teach the model, at training time, the ability to adapt on single samples, so that at test time it can further adapt itself to each single test sample. We formulate the adaptation to a single test sample as a variational Bayesian inference problem, which incorporates the test sample as a conditional into the generation of model parameters. Adaptation to each test sample requires only one feed-forward computation at test time, without any fine-tuning or self-supervised training on additional data from the unseen domains. Extensive ablation studies demonstrate that our model learns the ability to adapt to each single sample by mimicking domain shifts during training. Further, our model achieves at least comparable, and often better, performance than state-of-the-art methods on multiple benchmarks for domain generalization.
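A minimal sketch of the parameter-generation idea, assuming a hypothetical hypernetwork-style head in PyTorch: classifier weights are inferred from a single test feature via the reparameterization trick, so adaptation costs one forward pass and no fine-tuning. Layer sizes and the Gaussian form are illustrative, not the paper's exact variational model.

```python
import torch
import torch.nn as nn

class SampleConditionedClassifier(nn.Module):
    """Generate classifier weights conditioned on one test sample's feature."""
    def __init__(self, feat_dim=128, num_classes=10):
        super().__init__()
        self.to_mu = nn.Linear(feat_dim, feat_dim * num_classes)
        self.to_logvar = nn.Linear(feat_dim, feat_dim * num_classes)
        self.num_classes, self.feat_dim = num_classes, feat_dim

    def forward(self, feature):
        # feature: (1, feat_dim), a single test sample's representation.
        # Infer a Gaussian over classifier weights, then sample with the
        # reparameterization trick; one forward pass, no parameter updates.
        mu, logvar = self.to_mu(feature), self.to_logvar(feature)
        w = mu + (0.5 * logvar).exp() * torch.randn_like(mu)
        w = w.view(self.num_classes, self.feat_dim)
        return feature @ w.T                    # (1, num_classes) logits
```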


A Bit More Bayesian: Domain-Invariant Learning with Uncertainty

May 09, 2021
Zehao Xiao, Jiayi Shen, Xiantong Zhen, Ling Shao, Cees G. M. Snoek


Domain generalization is challenging due to the domain shift and the uncertainty caused by the inaccessibility of target domain data. In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference, by incorporating uncertainty into neural network weights. We couple domain invariance with variational Bayesian inference in a probabilistic formulation, which enables us to explore domain-invariant learning in a principled way. Specifically, we derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network. We empirically demonstrate the effectiveness of our proposal on four widely used cross-domain visual recognition benchmarks. Ablation studies validate the synergistic benefits of our Bayesian treatment when jointly learning domain-invariant representations and classifiers for domain generalization. Further, our method consistently delivers state-of-the-art mean accuracy on all benchmarks.
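For intuition on the weight-uncertainty treatment, below is a generic Bayesian linear layer with a diagonal Gaussian posterior and its KL term against a standard normal prior. Two such layers stacked would mirror the two-layer structure; the sketch omits the paper's domain-invariance objective.

```python
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    """Linear layer with a Gaussian posterior over its weights."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(out_dim, in_dim))
        self.logvar = nn.Parameter(torch.full((out_dim, in_dim), -5.0))

    def forward(self, x):
        # Reparameterized weight draw: a fresh sample per forward pass,
        # so repeated passes expose the model's weight uncertainty.
        w = self.mu + (0.5 * self.logvar).exp() * torch.randn_like(self.mu)
        return x @ w.T

    def kl(self):
        # KL(q(w) || N(0, I)) for a diagonal Gaussian posterior; added to
        # the training loss alongside the likelihood term.
        return 0.5 * (self.mu.pow(2) + self.logvar.exp() - 1 - self.logvar).sum()
```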

* Accepted to ICML 2021 

Crowd Counting and Density Estimation by Trellis Encoder-Decoder Network

Mar 03, 2019
Xiaolong Jiang, Zehao Xiao, Baochang Zhang, Xiantong Zhen, Xianbin Cao, David Doermann, Ling Shao


Crowd counting has recently attracted increasing interest in computer vision but remains a challenging problem. In this paper, we propose a trellis encoder-decoder network (TEDnet) for crowd counting, which focuses on generating high-quality density estimation maps. The major contributions are four-fold. First, we develop a new trellis architecture that incorporates multiple decoding paths to hierarchically aggregate features at different encoding stages, which can handle large variations of objects. Second, we design dense skip connections interleaved across paths to facilitate sufficient multi-scale feature fusion and to absorb the supervision information. Third, we propose a new combinatorial loss to enforce local coherence and spatial correlation in density maps. By imposing this combinatorial loss on intermediate outputs in a distributed manner, gradient vanishing can be largely alleviated for better back-propagation and faster convergence. Finally, our TEDnet achieves new state-of-the-art performance on four benchmarks, with an improvement of up to 14% in terms of MAE.
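The combinatorial loss is the most transferable ingredient; the sketch below approximates its local-coherence intent by combining pixel-wise error with errors on average-pooled density maps at coarser scales. The pooling scales are assumptions, not TEDnet's exact loss terms.

```python
import torch
import torch.nn.functional as F

def multiscale_density_loss(pred, target, scales=(2, 4)):
    """Pixel-wise MSE plus pooled MSE at coarser scales.

    pred, target: (N, 1, H, W) density maps. Pooling compares local counts
    at lower resolutions, penalizing spatially misplaced density mass.
    """
    loss = F.mse_loss(pred, target)
    for s in scales:
        loss = loss + F.mse_loss(F.avg_pool2d(pred, s), F.avg_pool2d(target, s))
    return loss
```

In the paper's spirit, such a loss would also be imposed on intermediate decoder outputs, distributing supervision along the trellis.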


In Defense of Single-column Networks for Crowd Counting

Aug 18, 2018
Ze Wang, Zehao Xiao, Kai Xie, Qiang Qiu, Xiantong Zhen, Xianbin Cao


Crowd counting, usually addressed by density estimation, has become an increasingly important topic in computer vision due to its widespread applications in video surveillance, urban planning, and intelligence gathering. However, it remains essentially challenging because of the greatly varied sizes of objects, coupled with severe occlusions and the vague appearance of extremely small individuals. Existing methods rely heavily on multi-column learning architectures to extract multi-scale features, which, however, suffer from a heavy computational cost that is especially undesirable for crowd counting. In this paper, we propose the single-column counting network (SCNet) for efficient crowd counting without relying on multi-column networks. SCNet consists of residual fusion modules (RFMs) for multi-scale feature extraction, a pyramid pooling module (PPM) for information fusion, and a sub-pixel convolutional module (SPCM) followed by a bilinear upsampling layer for resolution recovery. These modules enable SCNet to fully capture multi-scale features in a compact single-column architecture and to estimate high-resolution density maps efficiently. In addition, we provide a principled paradigm for density map generation and data augmentation for training, which further improves performance. Extensive experiments on three benchmark datasets show that SCNet delivers new state-of-the-art performance and surpasses previous methods by large margins, demonstrating its great effectiveness as a single-column network for crowd counting.
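As an illustration of the resolution-recovery path, here is a generic sub-pixel convolution (PixelShuffle) block followed by bilinear upsampling and a one-channel density head. Channel counts and scale factors are illustrative, not SCNet's exact SPCM configuration.

```python
import torch
import torch.nn as nn

class SubPixelUpsample(nn.Module):
    """Sub-pixel convolution plus bilinear upsampling for density maps."""
    def __init__(self, in_ch=64, scale=2):
        super().__init__()
        # Conv expands channels by scale^2; PixelShuffle trades them for
        # spatial resolution, avoiding checkerboard-prone deconvolutions.
        self.conv = nn.Conv2d(in_ch, in_ch * scale * scale, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.head = nn.Conv2d(in_ch, 1, 1)      # single-channel density map

    def forward(self, x):
        x = self.shuffle(self.conv(x))          # (N, in_ch, 2H, 2W)
        x = self.up(x)                          # (N, in_ch, 4H, 4W)
        return self.head(x)
```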
