Haoliang Wang

VaQuitA: Enhancing Alignment in LLM-Assisted Video Understanding

Dec 04, 2023
Yizhou Wang, Ruiyi Zhang, Haoliang Wang, Uttaran Bhattacharya, Yun Fu, Gang Wu

Language-model-based video understanding has been progressing at a remarkable pace, spurred by the introduction of Large Language Models (LLMs). However, prior research has focused predominantly on devising a projection layer that maps video features to tokens, an approach that is both rudimentary and inefficient. In our study, we introduce a cutting-edge framework, VaQuitA, designed to refine the synergy between video and textual information. At the data level, instead of sampling frames uniformly, we implement a sampling method guided by CLIP-score rankings, which enables a selection of frames more aligned with the given question. At the feature level, we integrate a trainable Video Perceiver alongside a Visual-Query Transformer (abbreviated as VQ-Former), which bolsters the interplay between the input question and the video features. We also discover that incorporating the simple prompt "Please be critical" into the LLM input can substantially enhance its video comprehension capabilities. Our experimental results indicate that VaQuitA consistently sets a new benchmark for zero-shot video question-answering tasks and is adept at producing high-quality, multi-turn video dialogues with users.
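
The CLIP-score-guided sampling described above is straightforward to prototype. Below is a minimal sketch, assuming off-the-shelf CLIP weights from Hugging Face and a list of pre-decoded PIL frames; the function name and the top-k budget are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of CLIP-score-guided frame sampling (not VaQuitA's code).
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def sample_frames_by_clip_score(frames, question, k=8):
    """Rank candidate frames by CLIP similarity to the question and keep
    the top-k, restoring temporal order before returning."""
    inputs = processor(text=[question], images=frames,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        scores = model(**inputs).logits_per_image.squeeze(-1)  # (num_frames,)
    top = torch.topk(scores, k=min(k, len(frames))).indices
    return [frames[i] for i in sorted(top.tolist())]
```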

Fairness-Aware Domain Generalization under Covariate and Dependence Shifts

Nov 23, 2023
Chen Zhao, Kai Jiang, Xintao Wu, Haoliang Wang, Latifur Khan, Christan Grant, Feng Chen

Achieving the generalization of an invariant classifier from source domains to shifted target domains while simultaneously considering model fairness is a substantial and complex challenge in machine learning. Existing domain generalization research typically attributes domain shifts to concept shift, which relates to alterations in class labels, and covariate shift, which pertains to variations in data styles. In this paper, by introducing another form of distribution shift, known as dependence shift, which involves variations in fair dependence patterns across domains, we propose a novel domain generalization approach that addresses domain shifts by considering both covariate and dependence shifts. We assert that there exists an underlying transformation model that can transform data from one domain to another. By using this model to generate data in synthetic domains, we learn a fairness-aware invariant classifier that enforces both model accuracy and fairness in unseen domains. Extensive empirical studies on four benchmark datasets demonstrate that our approach surpasses state-of-the-art methods.
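
To make the objective concrete, here is a hedged sketch of a fairness-aware invariant training loss that sums a task loss over real and synthetic domains and penalizes a demographic-parity gap; the paper's actual transformation model and fairness measure may differ, and every name below is hypothetical.

```python
# Illustrative fairness-aware invariant objective (not the paper's exact loss).
import torch
import torch.nn.functional as F

def fairness_gap(logits, sensitive):
    """Demographic-parity gap: difference in mean positive-class probability
    between two sensitive groups (assumes both groups appear in the batch)."""
    p = torch.sigmoid(logits).squeeze(-1)
    return (p[sensitive == 1].mean() - p[sensitive == 0].mean()).abs()

def invariant_fair_loss(model, domains, lam=1.0):
    """Sum binary-classification loss over (real + synthetic) domains,
    adding a fairness penalty per domain; `domains` yields (x, y, s)
    batches where s is the sensitive attribute."""
    loss = 0.0
    for x, y, s in domains:
        logits = model(x)
        loss = loss + F.binary_cross_entropy_with_logits(logits.squeeze(-1),
                                                         y.float())
        loss = loss + lam * fairness_gap(logits, s)
    return loss
```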

Towards Effective Semantic OOD Detection in Unseen Domains: A Domain Generalization Perspective

Sep 18, 2023
Haoliang Wang, Chen Zhao, Yunhui Guo, Kai Jiang, Feng Chen

Two prevalent types of distributional shift in machine learning are covariate shift (as observed across different domains) and semantic shift (as seen across different classes). Traditional out-of-distribution (OOD) detection techniques typically address only one of these shifts. However, real-world testing environments often present a combination of both covariate and semantic shifts. In this study, we introduce a novel problem, semantic OOD detection across domains, which simultaneously addresses both distributional shifts. To this end, we introduce two regularization strategies: domain generalization regularization, which ensures semantic invariance across domains to counteract the covariate shift, and OOD detection regularization, designed to enhance OOD detection capabilities against the semantic shift through energy bounding. Through rigorous testing on three standard domain generalization benchmarks, our proposed framework showcases its superiority over conventional domain generalization approaches in terms of OOD detection performance, while maintaining comparable in-distribution (InD) classification accuracy.
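
The energy-bounding regularizer mentioned above follows the energy-based OOD detection line of work. A minimal sketch of such a bound, with illustrative margins rather than the paper's settings:

```python
# Sketch of energy-bounded OOD regularization; margins are illustrative.
import torch
import torch.nn.functional as F

def energy(logits):
    # E(x) = -logsumexp over class logits; lower energy => more in-distribution
    return -torch.logsumexp(logits, dim=-1)

def energy_bound_loss(logits_in, logits_out, m_in=-25.0, m_out=-7.0):
    # Push InD energies below m_in and OOD energies above m_out.
    loss_in = F.relu(energy(logits_in) - m_in).pow(2).mean()
    loss_out = F.relu(m_out - energy(logits_out)).pow(2).mean()
    return loss_in + loss_out
```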

Measuring and Modeling Physical Intrinsic Motivation

May 24, 2023
Julio Martinez, Felix Binder, Haoliang Wang, Nick Haber, Judith Fan, Daniel L. K. Yamins

Humans are interactive agents driven to seek out situations with interesting physical dynamics. Here we formalize the functional form of physical intrinsic motivation. We first collect ratings of how interesting humans find a variety of physics scenarios. We then model human interestingness responses by implementing various hypotheses of intrinsic motivation, ranging from models that rely on simple scene features to models that depend on forward physics prediction. We find that the single best predictor of human responses is adversarial reward, a model derived from physical prediction loss. We also find that simple scene-feature models do not generalize their predictions of human responses across all scenarios. Finally, linearly combining the adversarial model with the number of collisions in a scene leads to the greatest improvement in predicting human responses, suggesting humans are driven towards scenarios that result in high information gain and physical activity.

* 6 pages, 5 figures, accepted to CogSci 2023 with full paper publication in the proceedings 
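
To illustrate the final modeling step (linearly combining the adversarial reward with per-scene collision counts to predict interestingness ratings), here is a toy sketch on synthetic data; the features, coefficients, and sample size are stand-ins, not the paper's data.

```python
# Toy illustration: linear combination of adversarial reward + collision count.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200                              # synthetic "scenarios"
adv_reward = rng.random(n)           # stand-in for prediction-loss reward
collisions = rng.poisson(3, n)       # stand-in for per-scene collision counts
ratings = 0.7 * adv_reward + 0.1 * collisions + rng.normal(0, 0.1, n)

X = np.column_stack([adv_reward, collisions])
reg = LinearRegression().fit(X, ratings)
print(reg.coef_, reg.score(X, ratings))  # fitted weights and R^2
```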

Few-Shot Dialogue Summarization via Skeleton-Assisted Prompt Transfer

May 20, 2023
Kaige Xie, Tong Yu, Haoliang Wang, Junda Wu, Handong Zhao, Ruiyi Zhang, Kanak Mahadik, Ani Nenkova, Mark Riedl

In real-world scenarios, labeled samples for dialogue summarization are usually limited (i.e., few-shot) due to the high annotation cost of high-quality dialogue summaries. To learn efficiently from few-shot samples, previous works have utilized massive annotated data from other downstream tasks and then performed prompt transfer in prompt tuning to enable cross-task knowledge transfer. However, existing general-purpose prompt transfer techniques lack consideration for dialogue-specific information. In this paper, we focus on improving the prompt transfer from dialogue state tracking to dialogue summarization and propose Skeleton-Assisted Prompt Transfer (SAPT), which leverages skeleton generation as extra supervision that functions as a medium connecting the distinct source and target tasks, resulting in the model's better consumption of dialogue state information. To automatically extract dialogue skeletons as supervised training data for skeleton generation, we design a novel approach with perturbation-based probes that requires neither annotation effort nor domain knowledge. Training the model on such skeletons can also help preserve model capability during prompt transfer. Our method significantly outperforms existing baselines. In-depth analyses demonstrate the effectiveness of our method in facilitating cross-task knowledge transfer in few-shot dialogue summarization.
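
One way to picture the perturbation-based probes is as a leave-one-turn-out loss probe over a pretrained summarizer: a turn matters if removing it raises the summarization loss. The sketch below assumes a Hugging Face seq2seq model (e.g., BART) and is a simplified stand-in for the authors' procedure.

```python
# Hypothetical perturbation probe for dialogue-skeleton extraction.
import torch

def turn_importance(model, tokenizer, turns, summary):
    """Score each dialogue turn by the loss increase when it is removed;
    high-scoring turns would be kept as the skeleton."""
    def loss_of(dialogue_turns):
        inputs = tokenizer(" ".join(dialogue_turns),
                           return_tensors="pt", truncation=True)
        labels = tokenizer(summary, return_tensors="pt",
                           truncation=True).input_ids
        with torch.no_grad():
            return model(**inputs, labels=labels).loss.item()

    base = loss_of(turns)
    return [loss_of(turns[:i] + turns[i + 1:]) - base
            for i in range(len(turns))]
```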

Layer Adaptive Deep Neural Networks for Out-of-distribution Detection

Mar 01, 2022
Haoliang Wang, Chen Zhao, Xujiang Zhao, Feng Chen

During the forward pass of Deep Neural Networks (DNNs), inputs are gradually transformed from low-level features to high-level conceptual labels. While features at different layers can summarize the important factors of the inputs at varying levels, modern out-of-distribution (OOD) detection methods mostly focus on the features of the final layers. In this paper, we propose a novel layer-adaptive OOD detection framework (LA-OOD) for DNNs that can fully utilize the outputs of intermediate layers. Specifically, instead of training a unified OOD detector at a fixed ending layer, we train multiple One-Class SVM OOD detectors simultaneously at the intermediate layers to exploit the full-spectrum characteristics encoded at varying depths of DNNs. We develop a simple yet effective layer-adaptive policy to identify the best layer for detecting each potential OOD example. LA-OOD can be applied to any existing DNN and does not require access to OOD samples during training. Using three DNNs of varying depths and architectures, our experiments demonstrate that LA-OOD is robust against OOD examples of varying complexity and can outperform state-of-the-art competitors by a large margin on some real-world datasets.

* accepted in PAKDD 2022 
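
A minimal sketch of the layer-adaptive idea using scikit-learn's One-Class SVM, assuming per-layer feature matrices have already been extracted from a frozen backbone; the max-over-layers scoring rule below is a simple stand-in for the paper's layer-adaptive policy.

```python
# Sketch: per-layer One-Class SVM OOD detectors with a max-score policy.
import numpy as np
from sklearn.svm import OneClassSVM

def fit_layer_detectors(layer_features):
    """layer_features: list of (n_samples, d_l) arrays, one per layer,
    extracted from in-distribution training data."""
    return [OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(f)
            for f in layer_features]

def ood_score(detectors, per_layer_feats):
    """Higher score => more OOD; each layer votes via its (negated)
    decision function, and the most suspicious layer wins."""
    scores = [-det.decision_function(f).ravel()
              for det, f in zip(detectors, per_layer_feats)]
    return np.max(np.stack(scores, axis=0), axis=0)
```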

Learning to communicate about shared procedural abstractions

Jun 30, 2021
William P. McCarthy, Robert D. Hawkins, Haoliang Wang, Cameron Holdaway, Judith E. Fan

Many real-world tasks require agents to coordinate their behavior to achieve shared goals. Successful collaboration requires not only adopting the same communicative conventions, but also grounding these conventions in the same task-appropriate conceptual abstractions. We investigate how humans use natural language to collaboratively solve physical assembly problems more effectively over time. Human participants were paired up in an online environment to reconstruct scenes containing two block towers. One participant could see the target towers, and sent assembly instructions for the other participant to reconstruct. Participants provided increasingly concise instructions across repeated attempts on each pair of towers, using higher-level referring expressions that captured each scene's hierarchical structure. To explain these findings, we extend recent probabilistic models of ad-hoc convention formation with an explicit perceptual learning mechanism. These results shed light on the inductive biases that enable intelligent agents to coordinate upon shared procedural abstractions.
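
As a toy illustration of the convention-formation idea (not the paper's actual model, which is richer and adds perceptual learning), the snippet below performs a single Bayesian update over two candidate lexicons after one successful trial; all numbers are illustrative.

```python
# Toy Bayesian update over candidate lexicons for an ad-hoc convention.
import numpy as np

lexicons = ["'the L' -> single block", "'the L' -> whole tower"]
posterior = np.array([0.5, 0.5])      # uniform prior over lexicons

# After a trial where the builder correctly assembled the whole tower,
# the outcome is far more likely under the tower-sized convention.
likelihood = np.array([0.1, 0.8])     # P(success | lexicon), illustrative
posterior = posterior * likelihood
posterior = posterior / posterior.sum()

print(dict(zip(lexicons, np.round(posterior, 3))))
# belief concentrates on the higher-level abstraction
```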
