Hsin-Ping Huang

Fine-grained Controllable Video Generation via Object Appearance and Context

Dec 05, 2023
Hsin-Ping Huang, Yu-Chuan Su, Deqing Sun, Lu Jiang, Xuhui Jia, Yukun Zhu, Ming-Hsuan Yang

Text-to-video generation has shown promising results. However, because these models take only natural language as input, users often find it difficult to provide the detailed information needed to precisely control the output. In this work, we propose fine-grained controllable video generation (FACTOR) to achieve detailed control. Specifically, FACTOR aims to control objects' appearances and context, including their location and category, in conjunction with the text prompt. To achieve detailed control, we propose a unified framework that jointly injects control signals into the existing text-to-video model. Our model consists of a joint encoder and adaptive cross-attention layers. By optimizing the encoder and the inserted layers, we adapt the model to generate videos that are aligned with both the text prompt and the fine-grained control. Compared to existing methods that rely on dense control signals such as edge maps, we provide a more intuitive and user-friendly interface for object-level fine-grained control. Our method achieves controllability of object appearances without finetuning, which reduces the per-subject optimization effort for users. Extensive experiments on standard benchmark datasets and user-provided inputs validate that our model obtains a 70% improvement in controllability metrics over competitive baselines.

* Project page: https://hhsinping.github.io/factor 
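
The abstract does not include code, but the idea of a joint encoder feeding object-level control tokens into inserted cross-attention layers can be sketched as below. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation; module names, dimensions, and the zero-initialized gate are assumptions.

```python
# Illustrative sketch (not the authors' code): injecting object-level control
# tokens into a frozen text-to-video backbone via an inserted cross-attention
# layer. Module and parameter names are hypothetical.
import torch
import torch.nn as nn


class JointControlEncoder(nn.Module):
    """Encodes per-object appearance embeddings together with box coordinates."""

    def __init__(self, appearance_dim=768, hidden_dim=768):
        super().__init__()
        self.box_proj = nn.Linear(4, hidden_dim)       # (x1, y1, x2, y2), normalized
        self.app_proj = nn.Linear(appearance_dim, hidden_dim)
        self.fuse = nn.Sequential(nn.LayerNorm(hidden_dim), nn.Linear(hidden_dim, hidden_dim))

    def forward(self, appearance, boxes):
        # appearance: (B, N, appearance_dim), boxes: (B, N, 4)
        tokens = self.app_proj(appearance) + self.box_proj(boxes)
        return self.fuse(tokens)                       # (B, N, hidden_dim) control tokens


class AdaptiveCrossAttention(nn.Module):
    """Inserted layer: video tokens attend to control tokens; the gate starts at
    zero so the frozen backbone's behaviour is unchanged at initialization."""

    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, video_tokens, control_tokens):
        out, _ = self.attn(self.norm(video_tokens), control_tokens, control_tokens)
        return video_tokens + self.gate * out


if __name__ == "__main__":
    enc = JointControlEncoder()
    xattn = AdaptiveCrossAttention()
    video = torch.randn(2, 1024, 768)                  # flattened spatio-temporal tokens
    ctrl = enc(torch.randn(2, 3, 768), torch.rand(2, 3, 4))
    print(xattn(video, ctrl).shape)                    # torch.Size([2, 1024, 768])
```

Initializing the gate at zero is a common strategy when adding layers to a pretrained model: the backbone's output is preserved before the inserted layers are trained.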

Video Generation Beyond a Single Clip

Apr 15, 2023
Hsin-Ping Huang, Yu-Chuan Su, Ming-Hsuan Yang

We tackle the long video generation problem, i.e., generating videos beyond the output length of video generation models. Due to computational resource constraints, video generation models can only generate clips that are relatively short compared with the length of real videos. Existing works apply a sliding-window approach to generate long videos at inference time, which is often limited to recurrent events or homogeneous content. To generate long videos covering diverse content and multiple events, we propose to use additional guidance to control the video generation process. We further present a two-stage approach to the problem, which allows us to utilize existing video generation models to generate high-quality videos within a small time window while modeling the video holistically based on the input guidance. The proposed approach is complementary to existing efforts on video generation, which focus on generating realistic video within a fixed time window. Extensive experiments on challenging real-world videos validate the benefit of the proposed method, which improves over the state of the art by up to 9.5% in objective metrics and is preferred by users more than 80% of the time.
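
To make the guidance-driven, window-by-window generation loop concrete, a minimal sketch is given below. The clip-generator interface, the overlap-based conditioning, and all names are assumptions for illustration; the paper's actual guidance signal and models are not specified here.

```python
# Minimal sketch (hypothetical interfaces): long video generation by chaining
# fixed-length clips, conditioning each window on per-window guidance and on
# the last frames of the previous window for continuity.
import torch


def generate_long_video(clip_model, guidance_per_window, overlap=4):
    """clip_model(cond_frames, guidance) -> (clip_len, C, H, W) tensor (assumed API)."""
    video = []
    cond = None                                  # no conditioning for the first window
    for guidance in guidance_per_window:         # one guidance signal per time window
        clip = clip_model(cond, guidance)        # generate a short clip
        # keep all frames of the first clip, then drop the overlapping prefix
        video.append(clip if cond is None else clip[overlap:])
        cond = clip[-overlap:]                   # condition the next window on the tail
    return torch.cat(video, dim=0)               # (total_frames, C, H, W)


if __name__ == "__main__":
    dummy = lambda cond, g: torch.randn(16, 3, 64, 64)   # stand-in for a real clip generator
    out = generate_long_video(dummy, guidance_per_window=[None] * 4)
    print(out.shape)                                     # torch.Size([52, 3, 64, 64])
```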


Self-supervised AutoFlow

Dec 08, 2022
Hsin-Ping Huang, Charles Herrmann, Junhwa Hur, Erika Lu, Kyle Sargent, Austin Stone, Ming-Hsuan Yang, Deqing Sun

Recently, AutoFlow has shown promising results on learning a training set for optical flow, but requires ground truth labels in the target domain to compute its search metric. Observing a strong correlation between the ground truth search metric and self-supervised losses, we introduce self-supervised AutoFlow to handle real-world videos without ground truth labels. Using self-supervised loss as the search metric, our self-supervised AutoFlow performs on par with AutoFlow on Sintel and KITTI where ground truth is available, and performs better on the real-world DAVIS dataset. We further explore using self-supervised AutoFlow in the (semi-)supervised setting and obtain competitive results against the state of the art.
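
As an illustration of how a self-supervised loss can stand in for a ground-truth search metric, the sketch below shows a basic photometric warp-and-compare loss for optical flow. This is a simplified assumption for exposition; the paper's actual self-supervised loss (e.g. with census transform and occlusion handling) is more sophisticated.

```python
# Hedged sketch: a simple photometric self-supervision signal of the kind that
# can replace a ground-truth-based metric. Only the warp-and-compare idea is shown.
import torch
import torch.nn.functional as F


def warp(img, flow):
    """Backward-warp img (B, C, H, W) with flow (B, 2, H, W) given in pixels."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(img)   # (2, H, W) pixel coordinates
    coords = grid.unsqueeze(0) + flow                      # target sampling locations
    # normalize to [-1, 1] as required by grid_sample
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid_n = torch.stack((coords_x, coords_y), dim=-1)     # (B, H, W, 2)
    return F.grid_sample(img, grid_n, align_corners=True)


def photometric_loss(frame1, frame2, flow_1to2):
    """L1 difference between frame1 and frame2 warped back by the predicted flow."""
    return (frame1 - warp(frame2, flow_1to2)).abs().mean()


if __name__ == "__main__":
    f1, f2 = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    flow = torch.zeros(1, 2, 64, 64)
    print(photometric_loss(f1, f2, flow).item())           # equals mean |f1 - f2| for zero flow
```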


Adaptive Transformers for Robust Few-shot Cross-domain Face Anti-spoofing

Mar 23, 2022
Hsin-Ping Huang, Deqing Sun, Yaojie Liu, Wen-Sheng Chu, Taihong Xiao, Jinwei Yuan, Hartwig Adam, Ming-Hsuan Yang

While recent face anti-spoofing methods perform well under intra-domain setups, an effective approach needs to account for the much larger appearance variations of images acquired in complex scenes with different sensors in order to achieve robust performance. In this paper, we present adaptive vision transformers (ViT) for robust cross-domain face anti-spoofing. Specifically, we adopt ViT as a backbone to exploit its strength in modeling long-range dependencies among pixels. We further introduce an ensemble adapters module and feature-wise transformation layers in the ViT to adapt to different domains and achieve robust performance with only a few samples. Experiments on several benchmark datasets show that the proposed models achieve both robust and competitive performance against state-of-the-art methods.
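
A hedged sketch of the two kinds of inserted modules described above, an ensemble-of-adapters block and a feature-wise transformation layer, is given below. It is not the released implementation; dimensions, the number of adapters, and the sampling details are assumptions.

```python
# Illustrative sketch (not the released code): modules of the kind inserted
# into a ViT backbone for few-shot cross-domain adaptation.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter applied residually to transformer features."""

    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down, self.up, self.act = nn.Linear(dim, bottleneck), nn.Linear(bottleneck, dim), nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))


class EnsembleAdapters(nn.Module):
    """Averages K adapters; an ensemble can make adaptation more robust than a single adapter."""

    def __init__(self, dim=768, k=2):
        super().__init__()
        self.adapters = nn.ModuleList([Adapter(dim) for _ in range(k)])

    def forward(self, x):
        return torch.stack([a(x) for a in self.adapters], dim=0).mean(dim=0)


class FeatureWiseTransform(nn.Module):
    """At training time, perturbs features with sampled per-channel scale/bias to
    simulate unseen domains; acts as the identity at evaluation time."""

    def __init__(self, dim=768):
        super().__init__()
        self.log_sigma_gamma = nn.Parameter(torch.full((dim,), -2.0))
        self.log_sigma_beta = nn.Parameter(torch.full((dim,), -2.0))

    def forward(self, x):
        if not self.training:
            return x
        gamma = 1.0 + torch.randn_like(x) * self.log_sigma_gamma.exp()
        beta = torch.randn_like(x) * self.log_sigma_beta.exp()
        return gamma * x + beta


if __name__ == "__main__":
    block = nn.Sequential(EnsembleAdapters(), FeatureWiseTransform())
    tokens = torch.randn(4, 197, 768)            # ViT tokens: (batch, patches + cls, dim)
    print(block(tokens).shape)                   # torch.Size([4, 197, 768])
```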


Learning to Stylize Novel Views

May 27, 2021
Hsin-Ping Huang, Hung-Yu Tseng, Saurabh Saini, Maneesh Singh, Ming-Hsuan Yang

We tackle a 3D scene stylization problem: generating stylized images of a scene from arbitrary novel views, given a set of images of the same scene and a reference image of the desired style as inputs. Directly combining novel view synthesis and stylization approaches leads to results that are blurry or inconsistent across views. We propose a point cloud-based method for consistent 3D scene stylization. First, we construct the point cloud by back-projecting image features into 3D space. Second, we develop point cloud aggregation modules to gather the style information of the 3D scene, and then modulate the features in the point cloud with a linear transformation matrix. Finally, we project the transformed features to 2D space to obtain the novel views. Experimental results on two diverse datasets of real-world scenes validate that our method generates consistent stylized novel views compared with alternative approaches.

* Project page: https://hhsinping.github.io/3d_scene_stylization/ 
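
The modulation of point cloud features by a linear transformation matrix can be illustrated with a simplified whitening/coloring-style transform, sketched below. This is an assumption-heavy stand-in for the paper's learned module, shown only to make the "linear transformation of point features" step concrete.

```python
# Minimal sketch, not the paper's exact module: modulating per-point content
# features with a single linear transformation matrix derived from style features.
import torch


def linear_style_modulation(content_feats, style_feats, eps=1e-5):
    """content_feats: (N, C) point features; style_feats: (M, C) style features."""
    c_mean = content_feats.mean(dim=0, keepdim=True)
    s_mean = style_feats.mean(dim=0, keepdim=True)
    c_centered = content_feats - c_mean
    s_centered = style_feats - s_mean

    c_dim = content_feats.shape[1]
    c_cov = c_centered.T @ c_centered / (content_feats.shape[0] - 1) + eps * torch.eye(c_dim)
    s_cov = s_centered.T @ s_centered / (style_feats.shape[0] - 1) + eps * torch.eye(c_dim)

    # whitening and coloring via symmetric matrix square roots
    def sqrt_inv_and_sqrt(cov):
        vals, vecs = torch.linalg.eigh(cov)
        vals = vals.clamp_min(eps)
        return (vecs * vals.rsqrt()) @ vecs.T, (vecs * vals.sqrt()) @ vecs.T

    c_whiten, _ = sqrt_inv_and_sqrt(c_cov)
    _, s_color = sqrt_inv_and_sqrt(s_cov)
    transform = s_color @ c_whiten                        # single linear transformation matrix
    return c_centered @ transform.T + s_mean


if __name__ == "__main__":
    points = torch.randn(2048, 32)                        # features back-projected onto a point cloud
    style = torch.randn(4096, 32)                         # features from the style reference image
    print(linear_style_modulation(points, style).shape)   # torch.Size([2048, 32])
```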

Semantic View Synthesis

Aug 24, 2020
Hsin-Ping Huang, Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang

We tackle a new problem of semantic view synthesis -- generating free-viewpoint rendering of a synthesized scene using a semantic label map as input. We build upon recent advances in semantic image synthesis and view synthesis for handling photographic image content generation and view extrapolation. Direct application of existing image/view synthesis methods, however, results in severe ghosting/blurry artifacts. To address the drawbacks, we propose a two-step approach. First, we focus on synthesizing the color and depth of the visible surface of the 3D scene. We then use the synthesized color and depth to impose explicit constraints on the multiple-plane image (MPI) representation prediction process. Our method produces sharp contents at the original view and geometrically consistent renderings across novel viewpoints. The experiments on numerous indoor and outdoor images show favorable results against several strong baselines and validate the effectiveness of our approach.

* ECCV 2020. Project: https://hhsinping.github.io/svs/index.html Colab: https://colab.research.google.com/drive/1iT5PfK7zl1quAOwC227GfBjieFMVHjI5 
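
For readers unfamiliar with the multi-plane image (MPI) representation mentioned above, the sketch below shows the standard back-to-front "over" compositing used to render an MPI into a single view; the per-plane warp to a novel viewpoint is omitted, and this is not the paper's code.

```python
# Hedged sketch: compositing an MPI (a stack of RGB + alpha planes) into one
# rendered view with back-to-front "over" blending.
import torch


def composite_mpi(colors, alphas):
    """colors: (D, 3, H, W) RGB planes ordered back-to-front; alphas: (D, 1, H, W)."""
    rendered = torch.zeros_like(colors[0])
    for color, alpha in zip(colors, alphas):          # back-to-front over-compositing
        rendered = alpha * color + (1.0 - alpha) * rendered
    return rendered                                   # (3, H, W)


if __name__ == "__main__":
    planes = torch.rand(32, 3, 128, 128)
    alphas = torch.rand(32, 1, 128, 128)
    print(composite_mpi(planes, alphas).shape)        # torch.Size([3, 128, 128])
```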

Unsupervised Adversarial Domain Adaptation for Implicit Discourse Relation Classification

Apr 15, 2020
Hsin-Ping Huang, Junyi Jessy Li

Implicit discourse relations are more challenging than their explicit counterparts, not only to classify but also to annotate. We tackle situations where training data for implicit relations are lacking and exploit domain adaptation from explicit relations (Ji et al., 2015). We present an unsupervised adversarial domain adaptive network equipped with a reconstruction component. Our system outperforms prior work and other adversarial benchmarks for unsupervised domain adaptation. Additionally, we extend our system to take advantage of labeled data if some are available.

* CoNLL 2019 
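
A common way to realize an unsupervised adversarial domain-adaptive network with a reconstruction component is a gradient reversal layer combined with a domain classifier and a reconstruction head; a minimal sketch is below. The gradient reversal choice, feature sizes, and names are assumptions and not necessarily the paper's exact formulation.

```python
# Illustrative sketch, not the paper's exact architecture: adversarial domain
# adaptation via gradient reversal, with an auxiliary reconstruction head.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None    # reverse gradients toward the encoder


class DomainAdaptiveClassifier(nn.Module):
    def __init__(self, in_dim=300, hidden=128, num_relations=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.relation_head = nn.Linear(hidden, num_relations)   # discourse relation classes
        self.domain_head = nn.Linear(hidden, 2)                 # explicit vs. implicit domain
        self.reconstruct = nn.Linear(hidden, in_dim)            # reconstruction component

    def forward(self, x, lambd=1.0):
        h = self.encoder(x)
        rel = self.relation_head(h)
        dom = self.domain_head(GradReverse.apply(h, lambd))     # reversed gradients confuse domains
        rec = self.reconstruct(h)
        return rel, dom, rec


if __name__ == "__main__":
    model = DomainAdaptiveClassifier()
    x = torch.randn(8, 300)                                     # e.g. averaged word embeddings
    rel, dom, rec = model(x)
    print(rel.shape, dom.shape, rec.shape)
```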