Tianlong Chen

Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training

Nov 19, 2022
Zhenglun Kong, Haoyu Ma, Geng Yuan, Mengshu Sun, Yanyue Xie, Peiyan Dong, Xin Meng, Xuan Shen, Hao Tang, Minghai Qin, Tianlong Chen, Xiaolong Ma, Xiaohui Xie, Zhangyang Wang, Yanzhi Wang

QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks

Nov 09, 2022
Kaixiong Zhou, Zhenyu Zhang, Shengyuan Chen, Tianlong Chen, Xiao Huang, Zhangyang Wang, Xia Hu

M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design

Oct 26, 2022
Hanxue Liang, Zhiwen Fan, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang

Old can be Gold: Better Gradient Flow can Make Vanilla-GCNs Great Again

Oct 14, 2022
Ajay Jaiswal, Peihao Wang, Tianlong Chen, Justin F. Rousseau, Ying Ding, Zhangyang Wang

A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking

Oct 14, 2022
Keyu Duan, Zirui Liu, Peihao Wang, Wenqing Zheng, Kaixiong Zhou, Tianlong Chen, Xia Hu, Zhangyang Wang

Advancing Model Pruning via Bi-level Optimization

Oct 12, 2022
Yihua Zhang, Yuguang Yao, Parikshit Ram, Pu Zhao, Tianlong Chen, Mingyi Hong, Yanzhi Wang, Sijia Liu

Augmentations in Hypergraph Contrastive Learning: Fabricated and Generative

Oct 07, 2022
Tianxin Wei, Yuning You, Tianlong Chen, Yang Shen, Jingrui He, Zhangyang Wang

Can We Solve 3D Vision Tasks Starting from A 2D Vision Transformer?

Sep 18, 2022
Yi Wang, Zhiwen Fan, Tianlong Chen, Hehe Fan, Zhangyang Wang

Is Attention All NeRF Needs?

Jul 27, 2022
Mukund Varma T, Peihao Wang, Xuxi Chen, Tianlong Chen, Subhashini Venugopalan, Zhangyang Wang

Neural Implicit Dictionary via Mixture-of-Expert Training

Jul 08, 2022
Peihao Wang, Zhiwen Fan, Tianlong Chen, Zhangyang Wang
