Zhangyang Wang

Versatile Diffusion: Text, Images and Variations All in One Diffusion Model

Nov 20, 2022
Xingqian Xu, Zhangyang Wang, Eric Zhang, Kai Wang, Humphrey Shi

Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training

Nov 19, 2022
Zhenglun Kong, Haoyu Ma, Geng Yuan, Mengshu Sun, Yanyue Xie, Peiyan Dong, Xin Meng, Xuan Shen, Hao Tang, Minghai Qin, Tianlong Chen, Xiaolong Ma, Xiaohui Xie, Zhangyang Wang, Yanzhi Wang

AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training

Nov 17, 2022
Yifan Jiang, Peter Hedman, Ben Mildenhall, Dejia Xu, Jonathan T. Barron, Zhangyang Wang, Tianfan Xue

StyleNAT: Giving Each Head a New Perspective

Nov 10, 2022
Steven Walton, Ali Hassani, Xingqian Xu, Zhangyang Wang, Humphrey Shi

QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks

Nov 09, 2022
Kaixiong Zhou, Zhenyu Zhang, Shengyuan Chen, Tianlong Chen, Xiao Huang, Zhangyang Wang, Xia Hu

Scaling Multimodal Pre-Training via Cross-Modality Gradient Harmonization

Nov 03, 2022
Junru Wu, Yi Liang, Feng Han, Hassan Akbari, Zhangyang Wang, Cong Yu

M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design

Oct 26, 2022
Hanxue Liang, Zhiwen Fan, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang

Symbolic Distillation for Learned TCP Congestion Control

Oct 24, 2022
S P Sharan, Wenqing Zheng, Kuo-Feng Hsu, Jiarong Xing, Ang Chen, Zhangyang Wang

Data-Model-Circuit Tri-Design for Ultra-Light Video Intelligence on Edge Devices

Oct 18, 2022
Yimeng Zhang, Akshay Karkal Kamath, Qiucheng Wu, Zhiwen Fan, Wuyang Chen, Zhangyang Wang, Shiyu Chang, Sijia Liu, Cong Hao

Signal Processing for Implicit Neural Representations

Oct 17, 2022
Dejia Xu, Peihao Wang, Yifan Jiang, Zhiwen Fan, Zhangyang Wang
