Zhangyang Wang

Search Behavior Prediction: A Hypergraph Perspective

Nov 29, 2022

NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views

Nov 29, 2022

Versatile Diffusion: Text, Images and Variations All in One Diffusion Model

Nov 20, 2022

Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training

Nov 19, 2022

AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training

Nov 17, 2022

StyleNAT: Giving Each Head a New Perspective

Nov 10, 2022

QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks

Nov 09, 2022

Scaling Multimodal Pre-Training via Cross-Modality Gradient Harmonization

Nov 03, 2022

M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design

Oct 26, 2022

Symbolic Distillation for Learned TCP Congestion Control

Oct 24, 2022