Zhangyang Wang

More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity
Jul 07, 2022
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Xuxi Chen, Qiao Xiao, Boqian Wu, Mykola Pechenizkiy, Decebal Mocanu, Zhangyang Wang

How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts
Jul 04, 2022
Haotao Wang, Junyuan Hong, Jiayu Zhou, Zhangyang Wang

Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations
Jul 04, 2022
Tianlong Chen, Peihao Wang, Zhiwen Fan, Zhangyang Wang

Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition
Jul 04, 2022
Haotao Wang, Aston Zhang, Yi Zhu, Shuai Zheng, Mu Li, Alex Smola, Zhangyang Wang

Removing Batch Normalization Boosts Adversarial Training
Jul 04, 2022
Haotao Wang, Aston Zhang, Shuai Zheng, Xingjian Shi, Mu Li, Zhangyang Wang

Training Your Sparse Neural Network Better with Any Mask
Jun 28, 2022
Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, Zhangyang Wang

Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning
Jun 17, 2022
Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Amini, Zhangyang Wang

Can pruning improve certified robustness of neural networks?
Jun 17, 2022
Zhangheng Li, Tianlong Chen, Linyi Li, Bo Li, Zhangyang Wang

Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness
Jun 15, 2022
Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang

A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth
Jun 13, 2022
Priya Narayanan, Xin Hu, Zhenyu Wu, Matthew D Thielke, John G Rogers, Andre V Harrison, John A D'Agostino, James D Brown, Long P Quang, James R Uplinger, Heesung Kwon, Zhangyang Wang
