Leonid Sigal

Uncertainty Guided Adaptive Warping for Robust and Efficient Stereo Matching

Jul 26, 2023
Junpeng Jing, Jiankun Li, Pengfei Xiong, Jiangyu Liu, Shuaicheng Liu, Yichen Guo, Xin Deng, Mai Xu, Lai Jiang, Leonid Sigal

Correlation-based stereo matching, which computes a cost volume between two feature maps, has achieved outstanding performance. Unfortunately, current methods with a fixed model do not work uniformly well across various datasets, greatly limiting their real-world applicability. To tackle this issue, this paper proposes a new perspective to dynamically calculate correlation for robust stereo matching. A novel Uncertainty Guided Adaptive Correlation (UGAC) module is introduced to robustly adapt the same model to different scenarios. Specifically, a variance-based uncertainty estimation is employed to adaptively adjust the sampling area during the warping operation. Additionally, we improve the traditional non-parametric warping with learnable parameters, such that position-specific weights can be learned. We show that by empowering the recurrent network with the UGAC module, stereo matching can be exploited more robustly and effectively. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the ETH3D, KITTI, and Middlebury datasets when employing the same fixed model across these datasets without any retraining. To target real-time applications, we further design a lightweight model based on UGAC, which also outperforms other methods on the KITTI benchmarks with only 0.6M parameters.
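
As a concrete illustration of the idea described above, here is a minimal PyTorch sketch of uncertainty-guided adaptive warping: the variance of the disparity probability distribution sets the sampling radius along the epipolar line, and a small convolution predicts learnable, position-specific combination weights. Module names, shapes, and the weight head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyGuidedWarp(nn.Module):
    def __init__(self, channels, num_samples=9):
        super().__init__()
        self.num_samples = num_samples
        # Hypothetical head: predicts position-specific combination weights
        # from the reference (left) features instead of fixed bilinear weights.
        self.weight_head = nn.Conv2d(channels, num_samples, 3, padding=1)

    def forward(self, feat_left, feat_right, disparity, disp_prob, disp_values):
        # disparity:   (B, 1, H, W) current disparity estimate
        # disp_prob:   (B, D, H, W) softmax over candidate disparities
        # disp_values: (D,) candidate disparity values
        B, C, H, W = feat_right.shape
        vals = disp_values.view(1, -1, 1, 1)
        mean = (disp_prob * vals).sum(1, keepdim=True)
        var = (disp_prob * (vals - mean) ** 2).sum(1, keepdim=True)
        radius = var.sqrt()  # higher uncertainty -> wider sampling area

        ys, xs = torch.meshgrid(
            torch.arange(H, device=feat_right.device, dtype=torch.float32),
            torch.arange(W, device=feat_right.device, dtype=torch.float32),
            indexing="ij")
        offsets = torch.linspace(-1.0, 1.0, self.num_samples, device=feat_right.device)

        samples = []
        for o in offsets:
            # Sample along the epipolar line at uncertainty-scaled offsets
            # around the current disparity.
            x = xs.unsqueeze(0) - (disparity + o * radius).squeeze(1)
            gx = 2.0 * x / (W - 1) - 1.0
            gy = 2.0 * ys.unsqueeze(0).expand(B, -1, -1) / (H - 1) - 1.0
            grid = torch.stack([gx, gy], dim=-1)  # (B, H, W, 2)
            samples.append(F.grid_sample(feat_right, grid, align_corners=True))

        weights = torch.softmax(self.weight_head(feat_left), dim=1)  # (B, S, H, W)
        return sum(w.unsqueeze(1) * s for w, s in zip(weights.unbind(1), samples))
```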

* Accepted by ICCV 2023

INVE: Interactive Neural Video Editing

Jul 15, 2023
Jiahui Huang, Leonid Sigal, Kwang Moo Yi, Oliver Wang, Joon-Young Lee

We present Interactive Neural Video Editing (INVE), a real-time video editing solution that assists the video editing process by consistently propagating sparse frame edits to the entire video clip. Our method is inspired by the recent work on Layered Neural Atlas (LNA). LNA, however, suffers from two major drawbacks: (1) the method is too slow for interactive editing, and (2) it offers insufficient support for some editing use cases, including direct frame editing and rigid texture tracking. To address these challenges we leverage and adopt highly efficient network architectures, powered by hash-grid encoding, to substantially improve processing speed. In addition, we learn bi-directional functions between the image and the atlas and introduce vectorized editing, which collectively enable a much greater variety of edits in both the atlas and the frames directly. Compared to LNA, our INVE reduces the learning and inference time by a factor of 5 and supports various video editing operations that LNA cannot. We showcase the superiority of INVE over LNA in interactive video editing through a comprehensive quantitative and qualitative analysis, highlighting its numerous advantages and improved performance. For video results, please see https://gabriel-huang.github.io/inve/
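
The core bi-directional frame-to-atlas mapping can be sketched with two coordinate networks, as below; the paper relies on hash-grid encodings for speed, for which a plain MLP stands in here. All names and the cycle-consistency loss shown are assumptions for illustration.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256, depth=4):
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU(inplace=True)]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class FrameAtlasMapping(nn.Module):
    def __init__(self):
        super().__init__()
        self.to_atlas = mlp(3, 2)  # (x, y, t) -> atlas coords (u, v)
        self.to_frame = mlp(3, 2)  # (u, v, t) -> frame coords (x, y)

    def forward(self, xyt):
        uv = self.to_atlas(xyt)                                     # forward mapping
        xy_rec = self.to_frame(torch.cat([uv, xyt[:, 2:]], dim=1))  # inverse mapping
        return uv, xy_rec

# Training would add a cycle-consistency loss so that edits made in a frame can
# be pushed into the atlas and propagated back to every other frame through the
# learned inverse mapping.
model = FrameAtlasMapping()
xyt = torch.rand(1024, 3)  # normalized pixel coordinates plus time
uv, xy_rec = model(xyt)
cycle_loss = (xy_rec - xyt[:, :2]).abs().mean()
```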

Implicit and Explicit Commonsense for Multi-sentence Video Captioning

Mar 14, 2023
Shih-Han Chou, James J. Little, Leonid Sigal

Existing dense or paragraph video captioning approaches rely on holistic representations of videos, possibly coupled with learned object/action representations, to condition hierarchical language decoders. However, they fundamentally lack the commonsense knowledge of the world required to reason about the progression of events, causality, and even the function of certain objects within a scene. To address this limitation we propose a novel Transformer-based video captioning model that takes into account both implicit (visuo-lingual and purely linguistic) and explicit (knowledge-base) commonsense knowledge. We show that these forms of knowledge, in isolation and in combination, enhance the quality of produced captions. Further, inspired by imitation learning, we propose a new task of instruction generation, where the goal is to produce a set of linguistic instructions from a video demonstration of a task being performed. We formalize the task using the ALFRED dataset [52], generated using the AI2-THOR environment. While instruction generation is conceptually similar to paragraph captioning, it differs in that it exhibits stronger object persistence, as well as spatially-aware and causal sentence structure. We show that our commonsense-knowledge-enhanced approach produces significant improvements on this task (up to 57% in METEOR and 8.5% in CIDEr), as well as the state-of-the-art result on more traditional video captioning on the ActivityNet Captions dataset [29].
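
One way the explicit (knowledge-base) signal could be injected is cross-attention from video tokens to retrieved knowledge embeddings; the minimal sketch below illustrates that fusion pattern only, under assumed shapes, and is not the authors' architecture.

```python
import torch
import torch.nn as nn

class KnowledgeFusion(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video_tokens, knowledge_tokens):
        # video_tokens: (B, Tv, D) clip features; knowledge_tokens: (B, Tk, D)
        # retrieved commonsense facts encoded by a text encoder.
        fused, _ = self.attn(video_tokens, knowledge_tokens, knowledge_tokens)
        return self.norm(video_tokens + fused)  # residual fusion into the video stream

fusion = KnowledgeFusion()
out = fusion(torch.randn(2, 16, 512), torch.randn(2, 10, 512))
```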

MINOTAUR: Multi-task Video Grounding From Multimodal Queries

Feb 16, 2023
Raghav Goyal, Effrosyni Mavroudi, Xitong Yang, Sainbayar Sukhbaatar, Leonid Sigal, Matt Feiszli, Lorenzo Torresani, Du Tran

Video understanding tasks take many forms, from action detection to visual query localization and spatio-temporal grounding of sentences. These tasks differ in the type of inputs (only video, or a video-query pair where the query is an image region or a sentence) and outputs (temporal segments or spatio-temporal tubes). However, at their core they require the same fundamental understanding of the video, i.e., the actors and objects in it, and their actions and interactions. So far these tasks have been tackled in isolation with individual, highly specialized architectures, which do not exploit the interplay between tasks. In contrast, in this paper, we present a single, unified model for tackling query-based video understanding in long-form videos. In particular, our model can address all three tasks of the Ego4D Episodic Memory benchmark, which entail queries of three different forms: given an egocentric video and a visual, textual, or activity query, the goal is to determine when and where the answer can be seen within the video. Our model design is inspired by recent query-based approaches to spatio-temporal grounding and contains modality-specific query encoders and task-specific sliding-window inference that allow multi-task training with diverse input modalities and different structured outputs. We exhaustively analyze relationships among the tasks and illustrate that cross-task learning leads to improved performance on each individual task, as well as the ability to generalize to unseen tasks, such as zero-shot spatial localization of language queries.
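
A minimal sketch of the two ingredients named above, modality-specific query encoders feeding a shared grounding model and sliding-window inference over a long video, is given below; the encoders, dimensions, and scoring head are illustrative stand-ins rather than the paper's design.

```python
import torch
import torch.nn as nn

class UnifiedGrounder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.text_enc = nn.Linear(300, dim)     # stand-in for a language encoder
        self.image_enc = nn.Linear(2048, dim)   # stand-in for a visual-crop encoder
        self.video_proj = nn.Linear(2048, dim)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
        self.score = nn.Linear(dim, 1)          # per-frame relevance score

    def forward(self, video_feats, query, modality):
        # Route the query through its modality-specific encoder, then let video
        # tokens cross-attend to the encoded query.
        q = self.text_enc(query) if modality == "text" else self.image_enc(query)
        mem = self.decoder(self.video_proj(video_feats), q.unsqueeze(1))
        return self.score(mem).squeeze(-1)      # (B, T) frame scores

@torch.no_grad()
def sliding_window_inference(model, video_feats, query, modality, win=128, stride=64):
    # Score overlapping windows of a long-form video and keep the max per frame.
    B, T = video_feats.shape[:2]
    starts = list(range(0, max(T - win, 0) + 1, stride))
    if starts[-1] != max(T - win, 0):
        starts.append(max(T - win, 0))
    scores = torch.full((B, T), float("-inf"))
    for s in starts:
        w = video_feats[:, s:s + win]
        scores[:, s:s + win] = torch.maximum(scores[:, s:s + win],
                                             model(w, query, modality))
    return scores
```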

Frustratingly Simple but Effective Zero-shot Detection and Segmentation: Analysis and a Strong Baseline

Feb 14, 2023
Siddhesh Khandelwal, Anirudth Nambirajan, Behjat Siddiquie, Jayan Eledath, Leonid Sigal

Methods for object detection and segmentation often require abundant instance-level annotations for training, which are time-consuming and expensive to collect. To address this, the task of zero-shot object detection (or segmentation) aims at learning effective methods for identifying and localizing object instances for categories that have no supervision available. Constructing architectures for these tasks requires choosing from a myriad of design options, ranging from the form of the class encoding used to transfer information from seen to unseen categories, to the nature of the function being optimized for learning. In this work, we extensively study these design choices, and carefully construct a simple yet extremely effective zero-shot recognition method. Through extensive experiments on object detection and segmentation on the MSCOCO dataset, we highlight that our proposed method outperforms existing, considerably more complex, architectures. Our findings and method, which we propose as a competitive future baseline, point towards the need to revisit some of the recent design trends in zero-shot detection/segmentation.
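
The seen-to-unseen transfer in this setting typically hinges on scoring region features against semantic class embeddings; the sketch below shows that kind of classification head under assumed shapes and embeddings, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZeroShotHead(nn.Module):
    def __init__(self, feat_dim, embed_dim, class_embeddings):
        super().__init__()
        self.proj = nn.Linear(feat_dim, embed_dim)
        # (num_classes, embed_dim) semantic vectors; unseen-class embeddings can
        # be swapped in at test time without retraining the detector.
        self.register_buffer("class_emb", F.normalize(class_embeddings, dim=-1))

    def forward(self, region_feats):
        q = F.normalize(self.proj(region_feats), dim=-1)  # (N, embed_dim)
        return q @ self.class_emb.t()                     # cosine scores (N, num_classes)

seen = torch.randn(65, 300)   # e.g., word embeddings of seen classes (illustrative)
head = ZeroShotHead(1024, 300, seen)
scores = head(torch.randn(10, 1024))
```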

* 17 pages, 7 figures

Self-Supervised Relation Alignment for Scene Graph Generation

Feb 02, 2023
Bicheng Xu, Renjie Liao, Leonid Sigal

The goal of scene graph generation is to predict a graph from an input image, where nodes correspond to identified and localized objects and edges to their corresponding interaction predicates. Existing methods are trained in a fully supervised manner and focus on message passing mechanisms, loss functions, and/or bias mitigation. In this work we introduce a simple yet effective self-supervised relational alignment regularization designed to improve scene graph generation performance. The proposed alignment is general and can be combined with any existing scene graph generation framework, where it is trained alongside the original model's objective. The alignment is achieved through distillation, where an auxiliary relation prediction branch that mirrors, and shares parameters with, the supervised counterpart is designed. In the auxiliary branch, relational input features are partially masked prior to message passing and predicate prediction. The predictions for masked relations are then aligned with their supervised counterparts after message passing. We illustrate the effectiveness of this self-supervised relational alignment in conjunction with two scene graph generation architectures, SGTR and Neural Motifs, and show that in both cases we achieve significantly improved performance.
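
A minimal sketch of the alignment term is given below, assuming a shared relation head run on full and partially masked relation features with a distillation-style loss between the two; the actual method applies the masking before message passing inside a full scene graph model, which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Shared relation head used by both the supervised and the auxiliary branch.
relation_head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 51))

def alignment_loss(rel_feats, mask_ratio=0.3):
    # rel_feats: (R, 512) per-relation input features (illustrative shape).
    with torch.no_grad():
        target = relation_head(rel_feats).softmax(-1)      # supervised-branch prediction

    mask = (torch.rand_like(rel_feats) > mask_ratio).float()
    masked_logits = relation_head(rel_feats * mask)        # auxiliary (masked) branch
    log_pred = masked_logits.log_softmax(-1)
    # Align masked predictions to the unmasked ones (distillation-style).
    return F.kl_div(log_pred, target, reduction="batchmean")

loss = alignment_loss(torch.randn(32, 512))
```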

Vocabulary-informed Zero-shot and Open-set Learning

Jan 04, 2023
Yanwei Fu, Xiaomei Wang, Hanze Dong, Yu-Gang Jiang, Meng Wang, Xiangyang Xue, Leonid Sigal

Despite significant progress in object categorization in recent years, a number of important challenges remain; mainly, the ability to learn from limited labeled data and to recognize object classes within a large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-size class vocabularies and typically requires separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above-mentioned challenges and address the problems of supervised, zero-shot, generalized zero-shot, and open-set recognition using a unified framework. Specifically, we propose a weighted maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. Distance constraints ensure that labeled samples are projected closer to their correct prototypes, in the embedding space, than to others. We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open-set recognition, with up to a 310K-class vocabulary, on the Animals with Attributes and ImageNet datasets.
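
A simplified stand-in for the distance constraints, a hinge that pushes a projected sample closer to its correct vocabulary atom than to any other by a margin, is sketched below; the paper's weighted maximum-margin objective is richer, and all shapes and names here are assumptions.

```python
import torch
import torch.nn as nn

class VocabInformedEmbed(nn.Module):
    def __init__(self, feat_dim, sem_dim, margin=0.1):
        super().__init__()
        self.proj = nn.Linear(feat_dim, sem_dim)
        self.margin = margin

    def forward(self, x, labels, prototypes):
        # prototypes: (V, sem_dim) semantic atoms for the full (open) vocabulary.
        z = self.proj(x)                             # project into the embedding space
        d = torch.cdist(z, prototypes)               # (B, V) distances to all atoms
        d_pos = d.gather(1, labels.unsqueeze(1))     # distance to the correct prototype
        # Hinge: the correct prototype must be closer than every other atom by `margin`.
        hinge = (self.margin + d_pos - d).clamp_min(0)
        negatives = torch.ones_like(d).scatter(1, labels.unsqueeze(1), 0.0)
        return (hinge * negatives).sum() / negatives.sum()

model = VocabInformedEmbed(2048, 300)
loss = model(torch.randn(8, 2048), torch.randint(0, 1000, (8,)), torch.randn(1000, 300))
```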

* IEEE Transactions on Pattern Analysis and Machine Intelligence (2019)  
* 17 pages, 8 figures. TPAMI 2019 extended from CVPR 2016 (arXiv:1604.07093) 

Semantically Enhanced Global Reasoning for Semantic Segmentation

Dec 06, 2022
Mir Rayat Imtiaz Hossain, Leonid Sigal, James J. Little

Recent advances in pixel-level tasks (e.g., segmentation) illustrate the benefit of long-range interactions between aggregated region-based representations that can enhance local features. However, such pixel-to-region associations and the resulting representation, which often take the form of attention, cannot model the underlying semantic structure of the scene (e.g., individual objects and, by extension, their interactions). In this work, we take a step toward addressing this limitation. Specifically, we propose an architecture where we learn to project image features into latent region representations and perform global reasoning across them, using a transformer, to produce contextualized and scene-consistent representations that are then fused with original pixel-level features. Our design enables the latent regions to represent semantically meaningful concepts, by ensuring that activated regions are spatially disjoint and unions of such regions correspond to connected object segments. The resulting semantic global reasoning (SGR) is end-to-end trainable and can be combined with any semantic segmentation framework and backbone. Combining SGR with DeepLabV3 results in a semantic segmentation performance that is competitive to the state-of-the-art, while resulting in more semantically interpretable and diverse region representations, which we show can effectively transfer to detection and instance segmentation. Further, we propose a new metric that allows us to measure the semantics of representations at both the object class and instance level.
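
The latent-region reasoning pattern can be sketched as below: soft pixel-to-region assignment, a transformer over the pooled region descriptors, and fusion back into pixel features. This is an illustrative stand-in, not the exact SGR module (for instance, the spatial-disjointness constraints are omitted).

```python
import torch
import torch.nn as nn

class LatentRegionReasoning(nn.Module):
    def __init__(self, channels=256, num_regions=16):
        super().__init__()
        self.assign = nn.Conv2d(channels, num_regions, 1)  # pixel-to-region logits
        self.reason = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(channels, nhead=8, batch_first=True), num_layers=2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feats):
        B, C, H, W = feats.shape
        a = self.assign(feats).flatten(2).softmax(1)       # (B, K, HW) soft assignment
        x = feats.flatten(2)                               # (B, C, HW)
        regions = torch.bmm(a, x.transpose(1, 2))          # (B, K, C) pooled descriptors
        regions = regions / (a.sum(-1, keepdim=True) + 1e-6)
        regions = self.reason(regions)                     # global reasoning across regions
        back = torch.bmm(a.transpose(1, 2), regions)       # (B, HW, C) broadcast to pixels
        back = back.transpose(1, 2).reshape(B, C, H, W)
        return self.fuse(torch.cat([feats, back], dim=1))  # fuse with original features

m = LatentRegionReasoning()
out = m(torch.randn(2, 256, 32, 32))
```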

GraphPNAS: Learning Distribution of Good Neural Architectures via Deep Graph Generative Models

Nov 28, 2022
Muchen Li, Jeffrey Yunfan Liu, Leonid Sigal, Renjie Liao

Neural architectures can be naturally viewed as computational graphs. Motivated by this perspective, in this paper we study neural architecture search (NAS) through the lens of learning random graph models. In contrast to existing NAS methods, which largely focus on searching for a single best architecture, i.e., point estimation, we propose GraphPNAS, a deep graph generative model that learns a distribution of well-performing architectures. Relying on graph neural networks (GNNs), our GraphPNAS can better capture the topologies of good neural architectures and the relations between operators therein. Moreover, our graph generator leads to a learnable probabilistic search method that is more flexible and efficient than the commonly used RNN generator and random search methods. Finally, we learn our generator via an efficient reinforcement learning formulation for NAS. To assess the effectiveness of our GraphPNAS, we conduct extensive experiments on three search spaces, including the challenging RandWire on TinyImageNet, ENAS on CIFAR10, and NAS-Bench-101/201. The complexity of RandWire is significantly larger than that of other search spaces in the literature. We show that our proposed graph generator consistently outperforms the RNN-based one and achieves performance better than or comparable to state-of-the-art NAS methods.
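
To make the probabilistic-search idea concrete, below is a deliberately simplified sketch that learns a distribution over DAG edges and updates it with REINFORCE from an architecture reward; the actual GraphPNAS generator is an autoregressive GNN, which this independent-edge model only approximates, and the evaluation function is a toy placeholder.

```python
import torch
import torch.nn as nn

class EdgeGenerator(nn.Module):
    """Samples a DAG over `n` nodes by sampling each upper-triangular edge."""
    def __init__(self, n=7):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n, n))
        self.mask = torch.triu(torch.ones(n, n), diagonal=1)  # only forward edges

    def sample(self):
        probs = torch.sigmoid(self.logits) * self.mask
        edges = torch.bernoulli(probs)
        log_prob = (edges * torch.log(probs.clamp_min(1e-8)) +
                    (1 - edges) * torch.log((1 - probs).clamp_min(1e-8)))
        return edges, (log_prob * self.mask).sum()

def reinforce_step(gen, optimizer, evaluate, baseline=0.0):
    edges, log_prob = gen.sample()
    reward = evaluate(edges)  # e.g., validation accuracy of the sampled architecture
    loss = -(reward - baseline) * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward

gen = EdgeGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-2)
# `evaluate` would train/evaluate the sampled architecture; here a toy proxy reward.
reinforce_step(gen, opt, evaluate=lambda e: e.sum().item() / 21.0)
```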

Make-A-Story: Visual Memory Conditioned Consistent Story Generation

Nov 23, 2022
Tanzila Rahman, Hsin-Ying Lee, Jian Ren, Sergey Tulyakov, Shweta Mahajan, Leonid Sigal

There has been a recent explosion of impressive generative models that can produce high-quality images (or videos) conditioned on text descriptions. However, all such approaches rely on conditioning sentences that contain unambiguous descriptions of scenes and the main actors in them. Employing such models for the more complex task of story visualization, where references and co-references naturally exist and one must reason about when to maintain consistency of actors and backgrounds across frames/scenes (and when not to) based on story progression, therefore remains a challenge. In this work, we address the aforementioned challenges and propose a novel autoregressive diffusion-based framework with a visual memory module that implicitly captures the actor and background context across the generated frames. Sentence-conditioned soft attention over the memories enables effective reference resolution and learns to maintain scene and actor consistency when needed. To validate the effectiveness of our approach, we extend the MUGEN dataset and introduce additional characters, backgrounds, and referencing in multi-sentence storylines. Our experiments for story generation on the MUGEN and FlintstonesSV datasets show that our method not only outperforms the prior state-of-the-art in generating frames with high visual quality that are consistent with the story, but also models appropriate correspondences between the characters and the background.
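
The sentence-conditioned soft attention over visual memories can be sketched as a standard cross-attention block, as below; the shapes, dimensions, and the way the attended context would condition the diffusion model are assumptions for illustration.

```python
import torch
import torch.nn as nn

class VisualMemoryAttention(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, sentence_emb, memory):
        # sentence_emb: (B, Ls, D) tokens of the current sentence
        # memory:       (B, M, D) features cached from previously generated frames
        context, weights = self.attn(sentence_emb, memory, memory)
        return context, weights  # weights act as soft reference resolution over past frames

mem_attn = VisualMemoryAttention()
ctx, w = mem_attn(torch.randn(1, 12, 768), torch.randn(1, 4, 768))
# `ctx` would be appended to the text conditioning of the current frame's
# diffusion step; `w` indicates which past frames the sentence refers to.
```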

* 10 pages 