Zenan Huang

Latent Processes Identification From Multi-View Time Series

May 14, 2023
Zenan Huang, Haobo Wang, Junbo Zhao, Nenggan Zheng

Understanding the dynamics of time series data typically requires identifying the unique latent factors for data generation, a.k.a. latent processes identification. Driven by the independence assumption, existing works have made great progress in handling single-view data. However, extending them to multi-view time series data is non-trivial because of two main challenges: (i) the complex data structure, such as temporal dependency, can violate the independence assumption; (ii) the factors from different views generally overlap and are hard to aggregate into a complete set. In this work, we propose a novel framework, MuLTI, that employs the contrastive learning technique to invert the data generative process for enhanced identifiability. Additionally, MuLTI integrates a permutation mechanism that merges corresponding overlapped variables by establishing an optimal transport formulation. Extensive experimental results on synthetic and real-world datasets demonstrate the superiority of our method in recovering identifiable latent variables on multi-view time series.
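As a rough illustration of the optimal-transport merging step mentioned in the abstract, the sketch below aligns overlapping latent factors from two views with a Sinkhorn-based soft permutation and then merges the matched pairs. This is not the authors' implementation; the toy latents, the cosine cost, and the Sinkhorn hyperparameters are assumptions made for this example.

```python
# Minimal sketch: Sinkhorn-based soft permutation between two views' latents.
import numpy as np

def sinkhorn(cost, n_iters=50, eps=0.05):
    """Entropic OT plan for a cost matrix with uniform marginals."""
    K = np.exp(-cost / eps)                       # Gibbs kernel
    n_r, n_c = cost.shape
    r, c = np.full(n_r, 1.0 / n_r), np.full(n_c, 1.0 / n_c)
    v = np.ones(n_c) / n_c
    for _ in range(n_iters):
        u = r / (K @ v)
        v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]            # transport plan

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))                     # latent factors, view 1
perm = rng.permutation(8)
z2 = z1[perm] + 0.01 * rng.normal(size=(8, 16))   # view 2: permuted + noise

# Cost = 1 - cosine similarity between candidate factor pairs.
z1n = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
z2n = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
plan = sinkhorn(1.0 - z1n @ z2n.T)

recovered = plan.argmax(axis=1)                   # hard matching estimate
merged = 0.5 * (z1 + z2[recovered])               # merge aligned factors
print("alignment correct:", np.array_equal(perm[recovered], np.arange(8)))
```

With the noise level used here the recovered matching is exact; in practice the transport plan would be used as a soft permutation inside the training objective rather than hard-assigned.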

* 15 pages, 9 figures, accepted by IJCAI-23 

Discriminative Radial Domain Adaptation

Jan 01, 2023
Zenan Huang, Jun Wen, Siheng Chen, Linchao Zhu, Nenggan Zheng

Domain adaptation methods typically reduce domain shift by learning domain-invariant features. Most existing methods are built on distribution matching, e.g., adversarial domain adaptation, which tends to corrupt feature discriminability. In this paper, we propose Discriminative Radial Domain Adaptation (DRDR), which bridges the source and target domains via a shared radial structure. It is motivated by the observation that, as the model is trained to be progressively discriminative, features of different categories expand outwards in different directions, forming a radial structure. We show that transferring such an inherently discriminative structure enhances feature transferability and discriminability simultaneously. Specifically, we represent each domain with a global anchor and each category with a local anchor to form a radial structure, and reduce domain shift via structure matching. Structure matching consists of two parts: an isometric transformation that aligns the structure globally and a local refinement that matches each category. To enhance the discriminability of the structure, we further encourage samples to cluster close to their corresponding local anchors based on optimal-transport assignment. Experimenting extensively on multiple benchmarks, our method is shown to consistently outperform state-of-the-art approaches on varied tasks, including typical unsupervised domain adaptation, multi-source domain adaptation, domain-agnostic learning, and domain generalization.
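The toy sketch below illustrates the radial structure described above: a global anchor per domain, per-category local anchors, and a simple structure-matching score over the resulting rays. It is not the paper's implementation; the availability of target labels, the cosine direction term, and the radius term are assumptions for this example.

```python
# Minimal sketch: per-domain radial structure and a toy matching loss.
import numpy as np

def radial_structure(feats, labels, n_classes):
    """Global anchor (domain mean) and local anchors (per-class means)."""
    global_anchor = feats.mean(axis=0)
    local_anchors = np.stack(
        [feats[labels == k].mean(axis=0) for k in range(n_classes)]
    )
    return global_anchor, local_anchors

def structure_matching_loss(src, src_y, tgt, tgt_y, n_classes):
    g_s, l_s = radial_structure(src, src_y, n_classes)
    g_t, l_t = radial_structure(tgt, tgt_y, n_classes)
    rad_s, rad_t = l_s - g_s, l_t - g_t            # per-class radial rays
    # Local refinement: match the direction of each category's ray.
    cos = np.sum(rad_s * rad_t, axis=1) / (
        np.linalg.norm(rad_s, axis=1) * np.linalg.norm(rad_t, axis=1) + 1e-8
    )
    direction_term = np.mean(1.0 - cos)
    # Isometric term: match each ray's length across domains.
    radius_term = np.mean(
        np.abs(np.linalg.norm(rad_s, axis=1) - np.linalg.norm(rad_t, axis=1))
    )
    return direction_term + radius_term

rng = np.random.default_rng(0)
n_classes, dim = 3, 5
src_y = rng.integers(0, n_classes, size=60)
tgt_y = rng.integers(0, n_classes, size=60)
class_means = rng.normal(size=(n_classes, dim))
src = class_means[src_y] + 0.1 * rng.normal(size=(60, dim))
tgt = class_means[tgt_y] + 0.1 * rng.normal(size=(60, dim)) + 0.5  # shifted
print("structure loss:", structure_matching_loss(src, src_y, tgt, tgt_y, n_classes))
```

Because the target here differs from the source only by a constant shift, the global anchors absorb most of the shift and the radial rays nearly coincide, which is the intuition behind matching the radial structure rather than raw feature distributions.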

Interventional Domain Adaptation

Nov 07, 2020
Jun Wen, Changjian Shui, Kun Kuang, Junsong Yuan, Zenan Huang, Zhefeng Gong, Nenggan Zheng

Domain adaptation (DA) aims to transfer discriminative features learned from a source domain to a target domain. Most DA methods focus on enhancing feature transferability through domain-invariance learning. However, the source-learned discriminability itself may be biased and unsafely transferable due to spurious correlations, i.e., some source-specific features are correlated with the category labels. We find that standard domain-invariance learning suffers from such correlations and incorrectly transfers these source-specifics. To address this issue, we intervene in the learning of feature discriminability using unlabeled target data, guiding it to discard the domain-specific part and become safely transferable. Concretely, we generate counterfactual features that distinguish the domain-specific from the domain-sharable part through a novel feature intervention strategy. To prevent domain-specifics from residing in the learned features, the feature discriminability is trained to be invariant to mutations in the domain-specifics of the counterfactual features. Experimenting on typical one-to-one unsupervised domain adaptation and challenging domain-agnostic adaptation tasks, the consistent performance improvements of our method over state-of-the-art approaches validate that the learned discriminative features are more safely transferable and generalize well to novel domains.
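The sketch below gives a rough picture of the counterfactual-intervention idea: mutate an assumed domain-specific slice of each feature and penalize the classifier for reacting to the mutation. It is not the authors' code; the half/half split into sharable and specific parts and the toy linear classifier are assumptions for this example.

```python
# Minimal sketch: counterfactual features and an invariance penalty.
import numpy as np

rng = np.random.default_rng(0)
n, dim, n_classes = 32, 8, 4
feats = rng.normal(size=(n, dim))          # source-domain features
W = rng.normal(size=(dim, n_classes))      # toy linear classifier

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Assume the first half of each feature is domain-sharable and the second
# half is domain-specific (purely for illustration).
shared, specific = feats[:, : dim // 2], feats[:, dim // 2 :]

# Counterfactual: keep the sharable part, swap in another sample's
# domain-specific part.
swap = rng.permutation(n)
counterfactual = np.concatenate([shared, specific[swap]], axis=1)

# Invariance penalty: predictions should not change under the mutation.
p_orig = softmax(feats @ W)
p_cf = softmax(counterfactual @ W)
invariance_loss = np.mean((p_orig - p_cf) ** 2)
print("invariance loss:", invariance_loss)
```

In a full pipeline this penalty would be added to the classification loss so that the classifier learns to ignore whatever subspace the intervention mutates, rather than being computed once on fixed random features as in this toy example.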
