Zhangyang Wang

Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better

Dec 18, 2021
Sameer Bibikar, Haris Vikalo, Zhangyang Wang, Xiaohan Chen


A Simple Single-Scale Vision Transformer for Object Localization and Instance Segmentation

Dec 17, 2021
Wuyang Chen, Xianzhi Du, Fan Yang, Lucas Beyer, Xiaohua Zhai, Tsung-Yi Lin, Huizhong Chen, Jing Li, Xiaodan Song, Zhangyang Wang, Denny Zhou


Auto-X3D: Ultra-Efficient Video Understanding via Finer-Grained Neural Architecture Search

Dec 09, 2021
Yifan Jiang, Xinyu Gong, Junru Wu, Humphrey Shi, Zhicheng Yan, Zhangyang Wang


Cold Brew: Distilling Graph Node Representations with Incomplete or Missing Neighborhoods

Nov 10, 2021
Wenqing Zheng, Edward W Huang, Nikhil Rao, Sumeet Katariya, Zhangyang Wang, Karthik Subbian


Improving Contrastive Learning on Imbalanced Seed Data via Open-World Sampling

Nov 01, 2021
Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang


You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership

Oct 30, 2021
Xuxi Chen, Tianlong Chen, Zhenyu Zhang, Zhangyang Wang


DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models

Oct 30, 2021
Xuxi Chen, Tianlong Chen, Yu Cheng, Weizhu Chen, Zhangyang Wang, Ahmed Hassan Awadallah


Delayed Propagation Transformer: A Universal Computation Engine towards Practical Control in Cyber-Physical Systems

Oct 29, 2021
Wenqing Zheng, Qiangqiang Guo, Hao Yang, Peihao Wang, Zhangyang Wang


Hyperparameter Tuning is All You Need for LISTA

Oct 29, 2021
Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin


AugMax: Adversarial Composition of Random Augmentations for Robust Training

Oct 26, 2021
Haotao Wang, Chaowei Xiao, Jean Kossaifi, Zhiding Yu, Anima Anandkumar, Zhangyang Wang
