Tsung-Yi Lin

ATT3D: Amortized Text-to-3D Object Synthesis

Jun 06, 2023
Jonathan Lorraine, Kevin Xie, Xiaohui Zeng, Chen-Hsuan Lin, Towaki Takikawa, Nicholas Sharp, Tsung-Yi Lin, Ming-Yu Liu, Sanja Fidler, James Lucas

Text-to-3D modelling has seen exciting progress by combining generative text-to-image models with image-to-3D methods like Neural Radiance Fields. DreamFusion recently achieved high-quality results but requires a lengthy, per-prompt optimization to create each 3D object. We address this by amortizing optimization over text prompts: a unified model is trained on many prompts simultaneously rather than separately, sharing computation across the prompt set and training in less time than per-prompt optimization. Our framework, Amortized Text-to-3D (ATT3D), enables knowledge sharing between prompts, generalizing to unseen setups and supporting smooth interpolations between prompts for novel assets and simple animations.
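
To make the amortization idea concrete, here is a minimal sketch of a single text-conditioned neural field shared across all prompts, so every prompt's gradient updates the same weights and interpolated prompt embeddings yield intermediate assets at test time. The text embeddings, placeholder objective, and all shapes are illustrative assumptions; the paper's renderer and score-distillation loss are omitted.

```python
# A minimal sketch of the amortization idea (not the paper's architecture): one
# text-conditioned field serves every prompt, so all prompts update shared weights.
import torch
import torch.nn as nn

class AmortizedTextField(nn.Module):
    """Maps (3D point, text embedding) -> (density, rgb) with one shared network."""
    def __init__(self, text_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + text_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # density + RGB
        )

    def forward(self, points, text_emb):
        # points: (N, 3); text_emb: (D,) broadcast to every point
        cond = text_emb.expand(points.shape[0], -1)
        out = self.mlp(torch.cat([points, cond], dim=-1))
        return out[..., :1], out[..., 1:].sigmoid()  # density, rgb

# Amortized training loop over many prompts (placeholder loss, not score distillation).
field = AmortizedTextField()
opt = torch.optim.Adam(field.parameters(), lr=1e-3)
prompt_embs = torch.randn(100, 64)                 # stand-in for encoded text prompts
for step in range(10):
    emb = prompt_embs[torch.randint(0, 100, (1,))].squeeze(0)
    pts = torch.rand(1024, 3) * 2 - 1
    density, rgb = field(pts, emb)
    loss = density.relu().mean() + rgb.mean()      # placeholder objective
    opt.zero_grad(); loss.backward(); opt.step()

# Interpolating two prompt embeddings gives an intermediate asset at test time.
mix = 0.5 * prompt_embs[0] + 0.5 * prompt_embs[1]
```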

* 22 pages, 20 figures 

Motion-Conditioned Diffusion Model for Controllable Video Synthesis

Apr 27, 2023
Tsai-Shien Chen, Chieh Hubert Lin, Hung-Yu Tseng, Tsung-Yi Lin, Ming-Hsuan Yang

Recent advancements in diffusion models have greatly improved the quality and diversity of synthesized content. To harness the expressive power of diffusion models, researchers have explored various controllable mechanisms that allow users to intuitively guide the content synthesis process. Although the latest efforts have primarily focused on video synthesis, there has been a lack of effective methods for controlling and describing desired content and motion. In response to this gap, we introduce MCDiff, a conditional diffusion model that generates a video from a starting image frame and a set of strokes, which allow users to specify the intended content and dynamics of the synthesis. To tackle the ambiguity of sparse motion inputs and achieve better synthesis quality, MCDiff first utilizes a flow completion model to predict the dense video motion based on the semantic understanding of the video frame and the sparse motion control. Then, the diffusion model synthesizes high-quality future frames to form the output video. We qualitatively and quantitatively show that MCDiff achieves state-of-the-art visual quality in stroke-guided controllable video synthesis. Additional experiments on MPII Human Pose further exhibit the capability of our model for diverse content and motion synthesis.
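
A rough sketch of the two-stage control flow described above, with toy modules standing in for the flow-completion network and the diffusion denoiser; module names, shapes, and the single denoising step shown are illustrative assumptions, not the paper's implementation.

```python
# Stage 1 densifies sparse stroke motion into per-pixel flow; stage 2 denoises future
# frames conditioned on the start frame and that dense flow (both nets are toy stand-ins).
import torch
import torch.nn as nn

class FlowCompletion(nn.Module):
    """Predict dense flow (2, H, W) from a start frame and sparse stroke flow."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3 + 2, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 2, 3, padding=1))

    def forward(self, frame, sparse_flow):
        return self.net(torch.cat([frame, sparse_flow], dim=1))

class FrameDenoiser(nn.Module):
    """One denoising step of the conditional diffusion model (toy UNet stand-in)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3 + 3 + 2, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, noisy_next, start_frame, dense_flow):
        return self.net(torch.cat([noisy_next, start_frame, dense_flow], dim=1))

frame = torch.randn(1, 3, 64, 64)
strokes = torch.zeros(1, 2, 64, 64)
strokes[:, :, 30, 30] = 1.0                          # one sparse user stroke
dense = FlowCompletion()(frame, strokes)             # stage 1: densify the motion
noise = torch.randn_like(frame)
pred = FrameDenoiser()(noise, frame, dense)          # stage 2: one conditional denoising step
```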

* Project page: https://tsaishien-chen.github.io/MCDiff/ 

Magic3D: High-Resolution Text-to-3D Content Creation

Nov 18, 2022
Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, Tsung-Yi Lin

DreamFusion has recently demonstrated the utility of a pre-trained text-to-image diffusion model to optimize Neural Radiance Fields (NeRF), achieving remarkable text-to-3D synthesis results. However, the method has two inherent limitations: (a) extremely slow optimization of NeRF and (b) low-resolution image-space supervision on NeRF, leading to low-quality 3D models with a long processing time. In this paper, we address these limitations with a two-stage optimization framework. First, we obtain a coarse model using a low-resolution diffusion prior and accelerate it with a sparse 3D hash grid structure. Using the coarse representation as the initialization, we further optimize a textured 3D mesh model with an efficient differentiable renderer interacting with a high-resolution latent diffusion model. Our method, dubbed Magic3D, can create high-quality 3D mesh models in 40 minutes, which is 2x faster than DreamFusion (reportedly taking 1.5 hours on average), while also achieving higher resolution. User studies show that 61.7% of raters prefer our approach over DreamFusion. Together with the image-conditioned generation capabilities, we provide users with new ways to control 3D synthesis, opening up new avenues for various creative applications.
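
The coarse-to-fine structure can be pictured as two optimization stages; the snippet below is only a schematic under that assumption, with placeholder parameters and a stand-in loss in place of the sparse 3D hash grid, the textured mesh, the differentiable renderer, and the diffusion priors.

```python
# Schematic of the two-stage strategy: a cheap low-resolution stage initializes a
# high-resolution refinement stage (everything here is a toy placeholder).
import torch

def diffusion_prior_loss(rendered):
    """Stand-in for supervision distilled from a pre-trained diffusion model."""
    return rendered.pow(2).mean()

# Stage 1: optimize a coarse volumetric model against a low-resolution prior.
coarse_params = torch.randn(4096, requires_grad=True)   # placeholder for hash-grid features
opt1 = torch.optim.Adam([coarse_params], lr=1e-2)
for _ in range(10):
    low_res_render = coarse_params.tanh()[:64 * 64].reshape(1, 1, 64, 64)
    loss = diffusion_prior_loss(low_res_render)
    opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2: initialize mesh-stage parameters from the coarse result and refine them
# at higher resolution against a latent-diffusion-style prior.
mesh_params = coarse_params.detach().clone().requires_grad_(True)
opt2 = torch.optim.Adam([mesh_params], lr=1e-3)
for _ in range(10):
    high_res_render = mesh_params.tanh()[:64 * 64].reshape(1, 1, 64, 64)  # placeholder render
    loss = diffusion_prior_loss(high_res_render)
    opt2.zero_grad(); loss.backward(); opt2.step()
```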

* Project website: https://deepimagination.cc/Magic3D 

Optimizing Anchor-based Detectors for Autonomous Driving Scenes

Aug 11, 2022
Xianzhi Du, Wei-Chih Hung, Tsung-Yi Lin

This paper summarizes model improvements and inference-time optimizations for popular anchor-based detectors in autonomous driving scenes. Based on the high-performing RCNN-RS and RetinaNet-RS detection frameworks designed for common detection scenes, we study a set of framework improvements that adapt the detectors to better detect small objects in crowded scenes. We then propose a model scaling strategy that scales input resolution and model size to achieve a better speed-accuracy trade-off curve. We evaluate our family of models on the real-time 2D detection track of the Waymo Open Dataset (WOD). Within the 70 ms/frame latency constraint on a V100 GPU, our largest Cascade RCNN-RS model achieves 76.9% AP/L1 and 70.1% AP/L2, attaining the new state of the art on WOD real-time 2D detection. Our fastest RetinaNet-RS model achieves 6.3 ms/frame while maintaining a reasonable detection precision of 50.7% AP/L1 and 42.9% AP/L2.
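
One way to picture the scaling strategy is a joint sweep over input resolution and model size under a latency budget, keeping the best configuration on the speed-accuracy trade-off. The configuration space, profiling function, and all numbers below are toy placeholders, not measured results from the paper.

```python
# Toy sweep: jointly vary resolution and capacity, keep the most accurate config
# that meets the latency budget. profile() is a stand-in for real benchmarking.
from dataclasses import dataclass

@dataclass
class Config:
    resolution: int      # input image size
    depth_mult: float    # backbone depth/width multiplier

def profile(cfg: Config):
    """Stand-in for benchmarking a detector; returns (latency_ms, accuracy)."""
    latency = 0.002 * cfg.resolution * cfg.resolution * cfg.depth_mult / 100
    accuracy = 50 + 10 * (cfg.resolution / 1024) + 8 * cfg.depth_mult  # fake numbers
    return latency, accuracy

budget_ms = 70.0
candidates = [Config(r, d) for r in (512, 768, 1024, 1280) for d in (0.5, 1.0, 1.5, 2.0)]
feasible = [(cfg, *profile(cfg)) for cfg in candidates if profile(cfg)[0] <= budget_ms]
best = max(feasible, key=lambda item: item[2])
print(f"best under {budget_ms} ms: {best[0]} -> {best[1]:.1f} ms, {best[2]:.1f} AP (toy)")
```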

Vision Transformer for NeRF-Based View Synthesis from a Single Input Image

Jul 12, 2022
Kai-En Lin, Lin Yen-Chen, Wei-Sheng Lai, Tsung-Yi Lin, Yi-Chang Shih, Ravi Ramamoorthi

Although neural radiance fields (NeRF) have shown impressive advances for novel view synthesis, most methods typically require multiple input images of the same scene with accurate camera poses. In this work, we seek to substantially reduce the inputs to a single unposed image. Existing approaches condition on local image features to reconstruct a 3D object, but often render blurry predictions at viewpoints that are far away from the source view. To address this issue, we propose to leverage both the global and local features to form an expressive 3D representation. The global features are learned from a vision transformer, while the local features are extracted from a 2D convolutional network. To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering. This novel 3D representation allows the network to reconstruct unseen regions without enforcing constraints like symmetry or canonical coordinate systems. Our method can render novel views from only a single input image and generalize across multiple object categories using a single model. Quantitative and qualitative evaluations demonstrate that the proposed method achieves state-of-the-art performance and renders richer details than existing approaches.
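
A hedged sketch of the conditioning scheme described above: global features from a transformer and local features from a 2D CNN are combined and fed, together with 3D points, to an MLP that predicts density and color. Patch sizes, the naive feature "projection", and all module sizes are illustrative assumptions, not the paper's architecture.

```python
# Combine global (transformer) and local (CNN) image features to condition a NeRF-style MLP.
import torch
import torch.nn as nn

class GlobalLocalNeRF(nn.Module):
    def __init__(self, d_global=64, d_local=32):
        super().__init__()
        self.cnn = nn.Conv2d(3, d_local, 3, padding=1)                       # local features
        enc_layer = nn.TransformerEncoderLayer(d_model=d_global, nhead=4, batch_first=True)
        self.vit = nn.TransformerEncoder(enc_layer, num_layers=2)            # global features
        self.patch_embed = nn.Linear(3 * 8 * 8, d_global)
        self.mlp = nn.Sequential(nn.Linear(3 + d_global + d_local, 128), nn.ReLU(),
                                 nn.Linear(128, 4))                          # density + rgb

    def forward(self, image, points):
        # image: (1, 3, 64, 64); points: (N, 3)
        patches = image.unfold(2, 8, 8).unfold(3, 8, 8).reshape(1, 3, -1, 8, 8)
        tokens = self.patch_embed(patches.permute(0, 2, 1, 3, 4).flatten(2))
        g = self.vit(tokens).mean(dim=1)                                     # (1, d_global)
        local_map = self.cnn(image)
        # naive "projection": sample the local feature at the image center for every point
        l = local_map[:, :, 32, 32]                                          # (1, d_local)
        cond = torch.cat([g, l], dim=-1).expand(points.shape[0], -1)
        out = self.mlp(torch.cat([points, cond], dim=-1))
        return out[..., :1], out[..., 1:].sigmoid()

model = GlobalLocalNeRF()
sigma, rgb = model(torch.randn(1, 3, 64, 64), torch.rand(256, 3))   # ready for volume rendering
```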

* Project website: https://cseweb.ucsd.edu/~viscomp/projects/VisionNeRF/ 

A Unified Sequence Interface for Vision Tasks

Jun 15, 2022
Ting Chen, Saurabh Saxena, Lala Li, Tsung-Yi Lin, David J. Fleet, Geoffrey Hinton

While language tasks are naturally expressed in a single, unified modeling framework, i.e., generating sequences of tokens, this has not been the case in computer vision. As a result, there is a proliferation of distinct architectures and loss functions for different vision tasks. In this work we show that a diverse set of "core" computer vision tasks can also be unified if formulated in terms of a shared pixel-to-sequence interface. We focus on four tasks, namely object detection, instance segmentation, keypoint detection, and image captioning, all with diverse types of outputs, e.g., bounding boxes or dense masks. Despite that, by formulating the output of each task as a sequence of discrete tokens with a unified interface, we show that one can train a neural network with a single model architecture and loss function on all these tasks, with no task-specific customization. To solve a specific task, we use a short prompt as a task description, and the sequence output adapts to the prompt to produce task-specific output. We show that such a model can achieve competitive performance compared to well-established task-specific models.
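
As a small illustration of what a shared pixel-to-sequence interface can look like, the sketch below quantizes one bounding box and class label into tokens from a common vocabulary, prepends a task-prompt token, and trains a single sequence model with one cross-entropy loss. The vocabulary layout, bin count, and model are assumptions for illustration only.

```python
# Task outputs become token sequences; a prompt token selects the task; one model, one loss.
import torch
import torch.nn as nn

NUM_BINS = 1000                      # coordinate quantization bins (assumed)
VOCAB = NUM_BINS + 100               # coordinate tokens + class/prompt tokens (assumed layout)
DETECT_PROMPT = NUM_BINS             # "detect" task token (assumed)

def box_to_tokens(box_xyxy, class_id):
    """Quantize normalized [0,1] box coords to integer tokens, then append the class token."""
    coords = (torch.tensor(box_xyxy) * (NUM_BINS - 1)).long()
    return torch.cat([coords, torch.tensor([NUM_BINS + 1 + class_id])])

class SeqModel(nn.Module):
    def __init__(self, d=128):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, VOCAB)

    def forward(self, tokens):                                  # (B, T) -> (B, T, VOCAB)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.shape[1])
        return self.head(self.body(self.emb(tokens), mask=causal))

target = box_to_tokens([0.1, 0.2, 0.5, 0.8], class_id=3)        # (5,) token targets
inp = torch.cat([torch.tensor([DETECT_PROMPT]), target[:-1]])   # teacher forcing with prompt
logits = SeqModel()(inp.unsqueeze(0))
loss = nn.functional.cross_entropy(logits.squeeze(0), target)   # same loss for every task
```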

* The first three authors contributed equally 

NeRF-Supervision: Learning Dense Object Descriptors from Neural Radiance Fields

Mar 03, 2022
Lin Yen-Chen, Pete Florence, Jonathan T. Barron, Tsung-Yi Lin, Alberto Rodriguez, Phillip Isola

Thin, reflective objects such as forks and whisks are common in our daily lives, but they are particularly challenging for robot perception because it is hard to reconstruct them using commodity RGB-D cameras or multi-view stereo techniques. While traditional pipelines struggle with objects like these, Neural Radiance Fields (NeRFs) have recently been shown to be remarkably effective for performing view synthesis on objects with thin structures or reflective materials. In this paper, we explore the use of NeRF as a new source of supervision for robust robot vision systems. In particular, we demonstrate that a NeRF representation of a scene can be used to train dense object descriptors. We use an optimized NeRF to extract dense correspondences between multiple views of an object, and then use these correspondences as training data for learning a view-invariant representation of the object. NeRF's use of a density field allows us to reformulate the correspondence problem with a novel distribution-of-depths formulation, as opposed to the conventional approach of using a depth map. Dense correspondence models supervised with our method significantly outperform off-the-shelf learned descriptors by 106% (on the PCK@3px metric, more than doubling performance) and outperform our baseline supervised with multi-view stereo by 29%. Furthermore, we demonstrate that the learned dense descriptors enable robots to perform accurate six-degree-of-freedom (6-DoF) pick and place of thin and reflective objects.
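
A simplified sketch of the distribution-of-depths idea: density along a ray is turned into NeRF-style termination weights, candidate correspondences in another view are weighted by that distribution, and a descriptor network is trained on the weighted match loss. There is no real NeRF or reprojection here; the density samples, candidate pixels, and descriptor network are toy stand-ins.

```python
# Weight correspondence losses by the ray's depth distribution instead of a single depth.
import torch
import torch.nn as nn

def termination_probs(density, deltas):
    """NeRF-style weights w_i = T_i * (1 - exp(-sigma_i * delta_i)) along one ray."""
    alpha = 1.0 - torch.exp(-density * deltas)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha + 1e-10])[:-1], dim=0)
    return trans * alpha

density = torch.rand(64) * 5.0                             # toy density samples along a ray
weights = termination_probs(density, torch.full((64,), 0.05))

desc_net = nn.Conv2d(3, 16, 3, padding=1)                  # toy dense descriptor network
img_a, img_b = torch.randn(1, 3, 32, 32), torch.randn(1, 3, 32, 32)
fa, fb = desc_net(img_a), desc_net(img_b)

# Pixel (ua, va) in view A maps to a depth-dependent pixel in view B; average the
# descriptor match loss under the depth distribution rather than using one depth.
ua, va = 16, 16
candidates_u = torch.arange(64) % 32                       # stand-in reprojections per depth
loss = sum(w * (fa[0, :, va, ua] - fb[0, :, va, cu]).pow(2).mean()
           for w, cu in zip(weights, candidates_u))
```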

* ICRA 2022, Website: https://yenchenlin.me/nerf-supervision/ 

Open-Vocabulary Image Segmentation

Dec 22, 2021
Golnaz Ghiasi, Xiuye Gu, Yin Cui, Tsung-Yi Lin

We design an open-vocabulary image segmentation model to organize an image into meaningful regions indicated by arbitrary texts. We identify that recent open-vocabulary models cannot localize visual concepts well despite recognizing what is in an image. We argue that these models miss an important step of visual grouping, which organizes pixels into groups before learning visual-semantic alignments. We propose OpenSeg to address the above issue. First, it learns to propose segmentation masks for possible organizations. Then it learns visual-semantic alignments by aligning each word in a caption to one or a few predicted masks. We find the mask representations are key to supporting learning from captions, making it possible to scale up the dataset and vocabulary sizes. Our work is the first to perform zero-shot transfer on holdout segmentation datasets. We set up two strong baselines by applying class activation maps or fine-tuning with pixel-wise labels on a pre-trained ALIGN model. OpenSeg outperforms these baselines by 3.4 mIoU on PASCAL-Context (459 classes) and 2.7 mIoU on ADE-20k (847 classes).
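
A minimal sketch of the grouping-then-alignment recipe: propose class-agnostic masks, pool image features inside each mask, and softly align every caption word to its best-matching regions. The feature map, mask proposals, and word embeddings below are random stand-ins, and the grounding objective is simplified for illustration.

```python
# Mask-pooled region features aligned to caption words with a softmax over regions.
import torch

H = W = 32
feat = torch.randn(64, H, W)                        # per-pixel image features (stand-in)
masks = torch.rand(8, H, W) > 0.7                   # 8 proposed binary masks (stand-in)
word_emb = torch.randn(5, 64)                       # 5 caption-word embeddings (stand-in)

# Masked average pooling -> one feature vector per proposed region.
region_feats = torch.stack([
    feat[:, m].mean(dim=1) if m.any() else feat.mean(dim=(1, 2)) for m in masks
])                                                  # (8, 64)

# Word-to-region alignment: each word attends to one or a few masks.
sim = word_emb @ region_feats.t() / 64 ** 0.5       # (5, 8) word/region similarities
align = sim.softmax(dim=1)
grounding_score = (align * sim).sum(dim=1).mean()   # higher when words match regions
loss = -grounding_score                             # caption-grounding-style objective
```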

A Simple Single-Scale Vision Transformer for Object Localization and Instance Segmentation

Dec 17, 2021
Wuyang Chen, Xianzhi Du, Fan Yang, Lucas Beyer, Xiaohua Zhai, Tsung-Yi Lin, Huizhong Chen, Jing Li, Xiaodan Song, Zhangyang Wang, Denny Zhou

This work presents a simple vision transformer design as a strong baseline for object localization and instance segmentation tasks. Transformers have recently demonstrated competitive performance on image classification tasks. To adapt ViT to object detection and dense prediction tasks, many works inherit the multistage design from convolutional networks and use highly customized ViT architectures. Behind this design, the goal is to pursue a better trade-off between computational cost and effective aggregation of multiscale global contexts. However, existing works adopt the multistage architectural design as a black-box solution without a clear understanding of its true benefits. In this paper, we comprehensively study three architecture design choices on ViT -- spatial reduction, doubled channels, and multiscale features -- and demonstrate that a vanilla ViT architecture can fulfill this goal without handcrafting multiscale features, maintaining the original ViT design philosophy. We further derive a scaling rule to optimize our model's trade-off between accuracy and computation cost / model size. By leveraging a constant feature resolution and hidden size throughout the encoder blocks, we propose a simple and compact ViT architecture called the Universal Vision Transformer (UViT) that achieves strong performance on COCO object detection and instance segmentation tasks.
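
The sketch below illustrates the single-scale spirit of this design: a plain ViT backbone with constant token resolution and hidden size in every block, whose one output feature map feeds the detection head directly, with no multiscale pyramid. All sizes and the toy head are illustrative assumptions, not the UViT configuration.

```python
# Single-scale ViT backbone: same resolution and width in every block, one output map.
import torch
import torch.nn as nn

class SingleScaleViT(nn.Module):
    def __init__(self, img=64, patch=8, dim=96, depth=4):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)  # constant dim & resolution
        self.grid = img // patch

    def forward(self, x):                                  # (B, 3, 64, 64) -> (B, dim, 8, 8)
        tok = self.patchify(x).flatten(2).transpose(1, 2)  # (B, 64, dim) tokens
        tok = self.blocks(tok)
        return tok.transpose(1, 2).reshape(x.shape[0], -1, self.grid, self.grid)

backbone = SingleScaleViT()
features = backbone(torch.randn(2, 3, 64, 64))             # one feature map, not a pyramid
det_head = nn.Conv2d(96, 4 + 1, 1)                         # toy box + objectness head
preds = det_head(features)
```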

Multi-Task Self-Training for Learning General Representations

Aug 25, 2021
Golnaz Ghiasi, Barret Zoph, Ekin D. Cubuk, Quoc V. Le, Tsung-Yi Lin

Despite the fast progress in training specialized models for various tasks, learning a single general model that works well for many tasks is still challenging for computer vision. Here we introduce multi-task self-training (MuST), which harnesses the knowledge in independent specialized teacher models (e.g., an ImageNet model for classification) to train a single general student model. Our approach has three steps. First, we train specialized teachers independently on labeled datasets. We then use the specialized teachers to label an unlabeled dataset, creating a multi-task pseudo-labeled dataset. Finally, this dataset, which now contains pseudo labels from teacher models trained on different datasets/tasks, is used to train a student model with multi-task learning. We evaluate the feature representations of the student model on 6 vision tasks, including image recognition (classification, detection, segmentation) and 3D geometry estimation (depth and surface normal estimation). MuST is scalable with unlabeled or partially labeled datasets and outperforms both specialized supervised models and self-supervised models when training on large-scale datasets. Lastly, we show MuST can improve upon already strong checkpoints trained with billions of examples. The results suggest self-training is a promising direction for aggregating labeled and unlabeled training data to learn general feature representations.
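
The three-step recipe can be sketched in a few lines: specialized teachers (assumed already trained) pseudo-label a shared unlabeled set, and a single multi-head student is trained on all pseudo labels jointly. The models, tasks, data, and losses below are toy placeholders chosen only to show the data flow.

```python
# MuST-style flow: teachers -> pseudo labels on unlabeled data -> multi-task student.
import torch
import torch.nn as nn

def make_backbone():
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())

cls_teacher = nn.Sequential(make_backbone(), nn.Linear(64, 10))    # step 1 (assume pre-trained)
depth_teacher = nn.Sequential(make_backbone(), nn.Linear(64, 1))   # step 1 (assume pre-trained)

unlabeled = torch.randn(16, 3, 32, 32)                             # step 2: pseudo-label it
with torch.no_grad():
    pseudo_cls = cls_teacher(unlabeled).argmax(dim=1)
    pseudo_depth = depth_teacher(unlabeled)

class Student(nn.Module):                                          # step 3: multi-task student
    def __init__(self):
        super().__init__()
        self.trunk = make_backbone()
        self.cls_head = nn.Linear(64, 10)
        self.depth_head = nn.Linear(64, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.cls_head(h), self.depth_head(h)

student = Student()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
logits, depth = student(unlabeled)
loss = nn.functional.cross_entropy(logits, pseudo_cls) + \
       nn.functional.mse_loss(depth, pseudo_depth)                 # joint multi-task loss
opt.zero_grad(); loss.backward(); opt.step()
```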

* ICCV 2021 