
Yu-Xiong Wang


Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models

Nov 02, 2023
Andy Zhou, Jindong Wang, Yu-Xiong Wang, Haohan Wang

We propose a conceptually simple and lightweight framework for improving the robustness of vision models through the combination of knowledge distillation and data augmentation. We address the conjecture that larger models do not make for better teachers by showing strong gains in out-of-distribution robustness when distilling from pretrained foundation models. Following this finding, we propose Discrete Adversarial Distillation (DAD), which leverages a robust teacher to generate adversarial examples and a VQGAN to discretize them, creating more informative samples than standard data augmentation techniques. We provide a theoretical framework for the use of a robust teacher in the setting of knowledge distillation with data augmentation, and demonstrate strong gains in out-of-distribution robustness and clean accuracy across different student architectures. Notably, our method adds only minor computational overhead compared to similar techniques and can be easily combined with other data augmentations for further improvements.
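
For intuition, a minimal sketch of a DAD-style training step is shown below, assuming a PyTorch-style interface; the names student, teacher, and vqgan (with encode/decode methods), as well as the single-step attack and the loss weighting, are illustrative assumptions rather than the paper's actual implementation.

```python
# Hypothetical sketch of a Discrete Adversarial Distillation (DAD)-style step.
# The vqgan interface and hyperparameters below are assumptions for illustration.
import torch
import torch.nn.functional as F

def dad_step(student, teacher, vqgan, images, labels, epsilon=4/255, alpha=0.5, T=2.0):
    # 1) Craft an adversarial perturbation against the robust *teacher* (one FGSM step here).
    images_adv = images.clone().detach().requires_grad_(True)
    loss_teacher = F.cross_entropy(teacher(images_adv), labels)
    grad, = torch.autograd.grad(loss_teacher, images_adv)
    images_adv = (images_adv + epsilon * grad.sign()).clamp(0, 1).detach()

    # 2) Discretize the adversarial images via a VQGAN encode/decode round trip
    #    (assumed interface: vqgan.encode -> latent codes, vqgan.decode -> images).
    with torch.no_grad():
        images_disc = vqgan.decode(vqgan.encode(images_adv))
        teacher_logits = teacher(images_disc)

    # 3) Distill: supervised loss on clean images + KL to the teacher on discretized ones.
    kd = F.kl_div(F.log_softmax(student(images_disc) / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    return alpha * F.cross_entropy(student(images), labels) + (1 - alpha) * kd
```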

* Published in NeurIPS 2023 

A Simple Solution for Offline Imitation from Observations and Examples with Possibly Incomplete Trajectories

Nov 02, 2023
Kai Yan, Alexander G. Schwing, Yu-Xiong Wang

Offline imitation from observations aims to solve MDPs where only task-specific expert states and task-agnostic non-expert state-action pairs are available. Offline imitation is useful in real-world scenarios where arbitrary interactions are costly and expert actions are unavailable. The state-of-the-art "DIstribution Correction Estimation" (DICE) methods minimize the divergence of state occupancies between the expert and learner policies and retrieve a policy with weighted behavior cloning; however, their results are unstable when learning from incomplete trajectories, due to a non-robust optimization in the dual domain. To address the issue, in this paper, we propose Trajectory-Aware Imitation Learning from Observations (TAILO). TAILO uses a discounted sum along the future trajectory as the weight for weighted behavior cloning. The terms of the sum are scaled by the output of a discriminator, which aims to identify expert states. Despite its simplicity, TAILO works well as long as the task-agnostic data contains trajectories or segments of expert behavior, a common assumption in prior work. In experiments across multiple testbeds, we find TAILO to be more robust and effective, particularly with incomplete trajectories.
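
The weighting scheme can be pictured with a short sketch: a backward recursion computes, for each step of a trajectory, the discounted sum of future discriminator scores; the discriminator scores and the discount value used here are assumptions for illustration.

```python
# Minimal sketch of TAILO-style weights for weighted behavior cloning.
# scores: per-step discriminator outputs for one trajectory (higher = more expert-like).
import numpy as np

def tailo_weights(scores, gamma=0.98):
    """Return w[t] = sum_{k>=0} gamma^k * scores[t+k] for each step t."""
    scores = np.asarray(scores, dtype=np.float64)
    weights = np.zeros_like(scores)
    running = 0.0
    for t in reversed(range(len(scores))):
        running = scores[t] + gamma * running   # backward recursion of the discounted sum
        weights[t] = running
    return weights

# Weighted behavior cloning then minimizes  -weights[t] * log pi(a_t | s_t)
# over the task-agnostic state-action data.
```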

* 35 pages; Accepted as a poster at NeurIPS 2023 

Frozen Transformers in Language Models Are Effective Visual Encoder Layers

Oct 19, 2023
Ziqi Pang, Ziyang Xie, Yunze Man, Yu-Xiong Wang

This paper reveals that large language models (LLMs), despite being trained solely on textual data, are surprisingly strong encoders for purely visual tasks in the absence of language. Even more intriguingly, this can be achieved by a simple yet previously overlooked strategy -- employing a frozen transformer block from pre-trained LLMs as a constituent encoder layer to directly process visual tokens. Our work pushes the boundaries of leveraging LLMs for computer vision tasks, significantly departing from conventional practices that typically necessitate a multi-modal vision-language setup with associated language prompts, inputs, or outputs. We demonstrate that our approach consistently enhances performance across a diverse range of tasks, encompassing pure 2D and 3D visual recognition tasks (e.g., image and point cloud classification), temporal modeling tasks (e.g., action recognition), non-semantic tasks (e.g., motion forecasting), and multi-modal tasks (e.g., 2D/3D visual question answering and image-text retrieval). Such improvements are a general phenomenon, applicable to various types of LLMs (e.g., LLaMA and OPT) and different LLM transformer blocks. We additionally propose the information filtering hypothesis to explain the effectiveness of pre-trained LLMs in visual encoding -- the pre-trained LLM transformer blocks discern informative visual tokens and further amplify their effect. This hypothesis is empirically supported by the observation that the feature activation, after training with LLM transformer blocks, exhibits a stronger focus on relevant regions. We hope that our work inspires new perspectives on utilizing LLMs and deepening our understanding of their underlying mechanisms. Code is available at https://github.com/ziqipang/LM4VisualEncoding.
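
A rough sketch of the strategy, assuming a HuggingFace-style decoder block and illustrative feature widths (768 for the visual backbone, 4096 for the LLM), looks roughly as follows; the adapter design is an assumption for illustration, not the paper's exact architecture.

```python
# Sketch: a frozen pretrained LLM transformer block used as a visual encoder layer.
# Visual tokens are projected into the LLM width, passed through the frozen block,
# and projected back; only the two linear adapters are trained.
import torch.nn as nn

class FrozenLLMVisualLayer(nn.Module):
    def __init__(self, llm_block, vis_dim=768, llm_dim=4096):
        super().__init__()
        self.proj_in = nn.Linear(vis_dim, llm_dim)    # trainable adapter
        self.proj_out = nn.Linear(llm_dim, vis_dim)   # trainable adapter
        self.llm_block = llm_block                    # e.g. one LLaMA/OPT decoder layer
        for p in self.llm_block.parameters():
            p.requires_grad = False                   # keep the LLM block frozen

    def forward(self, visual_tokens):                 # visual_tokens: (B, N, vis_dim)
        x = self.llm_block(self.proj_in(visual_tokens))
        if isinstance(x, tuple):                      # HF-style blocks may return a tuple
            x = x[0]
        return self.proj_out(x)
```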

* 23 pages, 13 figures. Code at https://github.com/ziqipang/LM4VisualEncoding 

Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models

Oct 06, 2023
Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, Yu-Xiong Wang

While large language models (LLMs) have demonstrated impressive performance on a range of decision-making tasks, they rely on simple acting processes and fall short of broad deployment as autonomous agents. We introduce LATS (Language Agent Tree Search), a general framework that synergizes the capabilities of LLMs in planning, acting, and reasoning. Drawing inspiration from Monte Carlo tree search in model-based reinforcement learning, LATS employs LLMs as agents, value functions, and optimizers, repurposing their latent strengths for enhanced decision-making. What is crucial in this method is the use of an environment for external feedback, which offers a more deliberate and adaptive problem-solving mechanism that moves beyond the limitations of existing techniques. Our experimental evaluation across diverse domains, such as programming, HotPotQA, and WebShop, illustrates the applicability of LATS for both reasoning and acting. In particular, LATS achieves 94.4% for programming on HumanEval with GPT-4 and an average score of 75.9 for web browsing on WebShop with GPT-3.5, demonstrating the effectiveness and generality of our method.
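
The search loop can be summarized with a small Monte Carlo tree search skeleton; propose_actions (LLM sampling), step_env (environment feedback), and evaluate_state (LLM value) are hypothetical callables standing in for the components described above, not the authors' code.

```python
# Minimal MCTS-style skeleton in the spirit of LATS: selection, expansion,
# evaluation with external feedback, and backpropagation.
import math

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def uct(node, c=1.4):
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(math.log(node.parent.visits) / node.visits)

def lats_search(root_state, propose_actions, step_env, evaluate_state, iters=50):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        while node.children:                           # selection by UCT
            node = max(node.children, key=uct)
        for action in propose_actions(node.state):     # expansion: LLM-sampled actions
            node.children.append(Node(step_env(node.state, action), parent=node))
        leaf = node.children[0] if node.children else node
        reward = evaluate_state(leaf.state)            # LLM value + environment feedback
        while leaf is not None:                        # backpropagation
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    return max(root.children, key=lambda n: n.visits).state
```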

* Website and code can be found at https://andyz245.github.io/LanguageAgentTreeSearch 

Streaming Motion Forecasting for Autonomous Driving

Oct 02, 2023
Ziqi Pang, Deva Ramanan, Mengtian Li, Yu-Xiong Wang

Trajectory forecasting is a widely studied problem for autonomous navigation. However, existing benchmarks evaluate forecasting on independent snapshots of trajectories, which are not representative of real-world applications that operate on a continuous stream of data. To bridge this gap, we introduce a benchmark that continuously queries future trajectories on streaming data, which we refer to as "streaming forecasting." Our benchmark inherently captures the disappearance and re-appearance of agents, presenting the emergent challenge of forecasting for occluded agents, a safety-critical problem that snapshot-based benchmarks overlook. Moreover, forecasting in the context of continuous timestamps naturally calls for temporal coherence between predictions from adjacent timestamps. Based on this benchmark, we further provide solutions and analysis for streaming forecasting. We propose a plug-and-play meta-algorithm called "Predictive Streamer" that can adapt any snapshot-based forecaster into a streaming forecaster. Our algorithm estimates the states of occluded agents by propagating their positions with multi-modal trajectories, and leverages differentiable filters to ensure temporal consistency. Both the occlusion reasoning and temporal coherence strategies significantly improve forecasting quality, resulting in 25% smaller endpoint errors for occluded agents and 10-20% smaller fluctuations of trajectories. Our work is intended to generate interest within the community by highlighting the importance of addressing motion forecasting in its intrinsic streaming setting. Code is available at https://github.com/ziqipang/StreamingForecasting.
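
To make the occlusion-handling idea concrete, here is a toy sketch: when an agent is occluded, its last multi-modal forecast is propagated to the current timestamp and smoothed into a temporally coherent estimate. The probability-weighted propagation and the exponential smoother below are simplifying assumptions, not the paper's differentiable filter.

```python
# Toy sketch of streaming occlusion reasoning: propagate the last multi-modal forecast
# of an occluded agent and keep the estimate temporally coherent across frames.
import numpy as np

def propagate_occluded(last_forecast_modes, mode_probs, steps_elapsed):
    """last_forecast_modes: (K, T, 2) future positions predicted at the last visible frame;
    mode_probs: (K,) mode probabilities. Returns a pseudo-position for the current frame."""
    idx = min(steps_elapsed - 1, last_forecast_modes.shape[1] - 1)
    return (mode_probs[:, None] * last_forecast_modes[:, idx, :]).sum(axis=0)

def smooth(prev_estimate, pseudo_obs, alpha=0.7):
    """A simple filter-like update standing in for the differentiable filter."""
    return alpha * prev_estimate + (1 - alpha) * pseudo_obs
```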

* IROS 2023, 8 pages, 9 figures 

Multi-task View Synthesis with Neural Radiance Fields

Sep 29, 2023
Shuhong Zheng, Zhipeng Bao, Martial Hebert, Yu-Xiong Wang

Multi-task visual learning is a critical aspect of computer vision. Current research, however, predominantly concentrates on the multi-task dense prediction setting, which overlooks the intrinsic 3D world and its multi-view consistent structures, and lacks the capability for versatile imagination. In response to these limitations, we present a novel problem setting -- multi-task view synthesis (MTVS), which reinterprets multi-task prediction as a set of novel-view synthesis tasks for multiple scene properties, including RGB. To tackle the MTVS problem, we propose MuvieNeRF, a framework that incorporates both multi-task and cross-view knowledge to simultaneously synthesize multiple scene properties. MuvieNeRF integrates two key modules, the Cross-Task Attention (CTA) and Cross-View Attention (CVA) modules, enabling the efficient use of information across multiple views and tasks. Extensive evaluation on both synthetic and realistic benchmarks demonstrates that MuvieNeRF is capable of simultaneously synthesizing different scene properties with promising visual quality, even outperforming conventional discriminative models in various settings. Notably, we show that MuvieNeRF exhibits universal applicability across a range of NeRF backbones. Our code is available at https://github.com/zsh2000/MuvieNeRF.
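
As a rough illustration of the attention design, the sketch below shows a generic cross-attention block that could mix features across tasks (CTA) or across source views (CVA); the dimensions and residual layout are assumptions for illustration, not the exact MuvieNeRF modules.

```python
# Generic cross-attention block: tokens (per task or per source view) attend to one another
# before being decoded into the individual scene properties (RGB, depth, normals, ...).
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens):                    # tokens: (B, N, dim), N = #tasks or #views
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.norm(tokens + attended)       # residual connection + normalization
```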

* ICCV 2023, Website: https://zsh2000.github.io/mtvs.github.io/ 

Improving Equivariance in State-of-the-Art Supervised Depth and Normal Predictors

Sep 28, 2023
Yuanyi Zhong, Anand Bhattad, Yu-Xiong Wang, David Forsyth

Dense depth and surface normal predictors should be equivariant to cropping-and-resizing -- cropping and resizing the input image should yield the correspondingly cropped and resized output. However, we find that state-of-the-art depth and normal predictors, despite their strong performance, surprisingly do not respect this equivariance. The problem exists even when crop-and-resize data augmentation is employed during training. To remedy this, we propose an equivariant regularization technique, consisting of an averaging procedure and a self-consistency loss, to explicitly promote cropping-and-resizing equivariance in depth and normal networks. Our approach can be applied to both CNN and Transformer architectures, does not incur extra cost during testing, and notably improves the supervised and semi-supervised learning performance of dense predictors on Taskonomy tasks. Finally, finetuning with our loss on unlabeled images improves not only the equivariance but also the accuracy of state-of-the-art depth and normal predictors when evaluated on NYU-v2. GitHub link: https://github.com/mikuhatsune/equivariance
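
The self-consistency part of the idea can be sketched as follows: the prediction on a crop should match the crop of the prediction. The crop parameters, interpolation mode, and L1 penalty are illustrative choices, and the averaging procedure is omitted here.

```python
# Hedged sketch of a crop-and-resize self-consistency loss for a dense predictor
# (model maps (B, 3, H, W) images to (B, C, H, W) depth/normal maps).
import torch.nn.functional as F

def crop_consistency_loss(model, image, top, left, h, w):
    pred_full = model(image)
    crop_of_pred = pred_full[:, :, top:top + h, left:left + w]
    crop_of_pred = F.interpolate(crop_of_pred, size=image.shape[-2:],
                                 mode="bilinear", align_corners=False)

    image_crop = F.interpolate(image[:, :, top:top + h, left:left + w],
                               size=image.shape[-2:],
                               mode="bilinear", align_corners=False)
    pred_of_crop = model(image_crop)

    # Penalize the mismatch between "predict then crop" and "crop then predict".
    return F.l1_loss(pred_of_crop, crop_of_pred.detach())
```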

* ICCV 2023 

Aligning Large Multimodal Models with Factually Augmented RLHF

Sep 25, 2023
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell

Large Multimodal Models (LMMs) are built across modalities, and misalignment between the two modalities can result in "hallucination": textual outputs that are not grounded in the multimodal context. To address the multimodal misalignment issue, we adapt Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark, MMHAL-BENCH, with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves a remarkable improvement on the LLaVA-Bench dataset, reaching 94% of the performance level of the text-only GPT-4 (whereas previous best methods reach only the 87% level), and a 60% improvement over other baselines on MMHAL-BENCH. We open-source our code, model, and data at https://llava-rlhf.github.io.
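
The factual augmentation of the reward model can be pictured with a small sketch of how its text input might be assembled; the prompt template and the reward-model interface below are assumptions, not the released implementation.

```python
# Sketch: augment the reward model's input with ground-truth facts (captions, options)
# so that plausible-but-ungrounded responses are harder to reward ("reward hacking").
def build_reward_input(question, response, captions, options=None):
    facts = "Image captions: " + " ".join(captions)
    if options:
        facts += " Ground-truth options: " + "; ".join(options)
    return (f"{facts}\n"
            f"Question: {question}\n"
            f"Response: {response}\n"
            "Judge how well the response is grounded in the image and the facts above.")

# A reward model would then score, e.g.:
# reward = reward_model(image_features, build_reward_input(q, r, captions, options))
```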

* Preprint 

InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion

Aug 31, 2023
Sirui Xu, Zhengyuan Li, Yu-Xiong Wang, Liang-Yan Gui

This paper addresses a novel task of anticipating 3D human-object interactions (HOIs). Most existing research on HOI synthesis lacks comprehensive whole-body interactions with dynamic objects, e.g., it is often limited to manipulating small or static objects. Our task is significantly more challenging, as it requires modeling dynamic objects with various shapes, capturing whole-body motion, and ensuring physically valid interactions. To this end, we propose InterDiff, a framework comprising two key steps: (i) interaction diffusion, where we leverage a diffusion model to encode the distribution of future human-object interactions; (ii) interaction correction, where we introduce a physics-informed predictor to correct denoised HOIs in a diffusion step. Our key insight is to inject the prior knowledge that interactions, when expressed in a reference frame relative to contact points, follow a simple pattern and are easily predictable. Experiments on multiple human-object interaction datasets demonstrate the effectiveness of our method for this task, which is capable of producing realistic, vivid, and remarkably long-term 3D HOI predictions.
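
At a high level, the two steps can be sketched as an alternation inside the sampling loop; denoise_step and correct_interaction are hypothetical callables standing in for the diffusion model and the physics-informed predictor.

```python
# High-level sketch of an InterDiff-style sampling loop: denoise, then apply a
# physics-informed correction at each diffusion step. Not the authors' implementation.
import torch

@torch.no_grad()
def sample_interaction(denoise_step, correct_interaction, shape, num_steps=50):
    x = torch.randn(shape)                  # noisy future human-object interaction sequence
    for t in reversed(range(num_steps)):
        x = denoise_step(x, t)              # (i) interaction diffusion
        x = correct_interaction(x)          # (ii) correction, e.g. keeping object motion
                                            #      consistent with predicted contact points
    return x
```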

* ICCV 2023; Project Page: https://sirui-xu.github.io/InterDiff/ 

An Empirical Analysis of Range for 3D Object Detection

Aug 08, 2023
Neehar Peri, Mengtian Li, Benjamin Wilson, Yu-Xiong Wang, James Hays, Deva Ramanan

LiDAR-based 3D detection plays a vital role in autonomous navigation. Surprisingly, although autonomous vehicles (AVs) must detect both near-field objects (for collision avoidance) and far-field objects (for longer-term planning) to navigate safely, contemporary benchmarks focus only on near-field 3D detection. In this paper, we present an empirical analysis of far-field 3D detection using the long-range detection dataset Argoverse 2.0 to better understand the problem, and share the following insight: near-field LiDAR measurements are dense and optimally encoded by small voxels, while far-field measurements are sparse and are better encoded with large voxels. We exploit this observation to build a collection of range experts tuned for near-vs-far field detection, and propose simple techniques to efficiently ensemble models for long-range detection that improve efficiency by 33% and boost accuracy by 3.2% CDS.
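
The range-expert idea can be illustrated with a toy routing sketch: points are split by distance from the ego vehicle and handled by a small-voxel near-field expert or a large-voxel far-field expert. The 50 m threshold and the detector interfaces are assumptions for illustration.

```python
# Toy sketch of range-based ensembling for long-range 3D detection.
import numpy as np

def ensemble_by_range(points, near_detector, far_detector, range_threshold=50.0):
    """points: (N, D) LiDAR points with x, y in the first two columns.
    near_detector / far_detector return lists of predicted boxes."""
    dists = np.linalg.norm(points[:, :2], axis=1)                  # ground-plane range
    near_boxes = near_detector(points[dists <= range_threshold])   # dense, small-voxel expert
    far_boxes = far_detector(points[dists > range_threshold])      # sparse, large-voxel expert
    return near_boxes + far_boxes                                  # merge the two sets
```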

* Accepted to ICCV 2023 Workshop - Robustness and Reliability of Autonomous Vehicles in the Open-World 