Ming-Hsuan Yang

Autoregressive 3D Shape Generation via Canonical Mapping

Apr 05, 2022
An-Chieh Cheng, Xueting Li, Sifei Liu, Min Sun, Ming-Hsuan Yang

Animatable Neural Radiance Fields from Monocular RGB-D

Apr 04, 2022
Tiantian Wang, Nikolaos Sarafianos, Ming-Hsuan Yang, Tony Tung

Adaptive Transformers for Robust Few-shot Cross-domain Face Anti-spoofing

Mar 23, 2022
Hsin-Ping Huang, Deqing Sun, Yaojie Liu, Wen-Sheng Chu, Taihong Xiao, Jinwei Yuan, Hartwig Adam, Ming-Hsuan Yang

V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer

Mar 20, 2022
Runsheng Xu, Hao Xiang, Zhengzhong Tu, Xin Xia, Ming-Hsuan Yang, Jiaqi Ma

Deep Image Deblurring: A Survey

Jan 26, 2022
Kaihao Zhang, Wenqi Ren, Wenhan Luo, Wei-Sheng Lai, Bjorn Stenger, Ming-Hsuan Yang, Hongdong Li

Towards a Unified Foundation Model: Jointly Pre-Training Transformers on Unpaired Images and Text

Dec 14, 2021
Qing Li, Boqing Gong, Yin Cui, Dan Kondratyuk, Xianzhi Du, Ming-Hsuan Yang, Matthew Brown

An Informative Tracking Benchmark

Dec 13, 2021
Xin Li, Qiao Liu, Wenjie Pei, Qiuhong Shen, Yaowei Wang, Huchuan Lu, Ming-Hsuan Yang

Contextualized Spatio-Temporal Contrastive Learning with Self-Supervision

Dec 09, 2021
Liangzhe Yuan, Rui Qian, Yin Cui, Boqing Gong, Florian Schroff, Ming-Hsuan Yang, Hartwig Adam, Ting Liu

Exploring Temporal Granularity in Self-Supervised Video Representation Learning

Dec 08, 2021
Rui Qian, Yeqing Li, Liangzhe Yuan, Boqing Gong, Ting Liu, Matthew Brown, Serge Belongie, Ming-Hsuan Yang, Hartwig Adam, Yin Cui

Benchmarking Deep Deblurring Algorithms: A Large-Scale Multi-Cause Dataset and A New Baseline Model

Dec 01, 2021
Kaihao Zhang, Wenhan Luo, Boheng Chen, Wenqi Ren, Bjorn Stenger, Wei Liu, Hongdong Li, Ming-Hsuan Yang
