Haizhong Zheng

Adaptive Skeleton Graph Decoding

Feb 19, 2024
Shuowei Jin, Yongji Wu, Haizhong Zheng, Qingzhao Zhang, Matthew Lentz, Z. Morley Mao, Atul Prakash, Feng Qian, Danyang Zhuo

Learn To be Efficient: Build Structured Sparsity in Large Language Models

Feb 13, 2024
Haizhong Zheng, Xiaoyan Bai, Beidi Chen, Fan Lai, Atul Prakash

Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation

Oct 11, 2023
Haizhong Zheng, Jiachen Sun, Shutong Wu, Bhavya Kailkhura, Zhuoqing Mao, Chaowei Xiao, Atul Prakash

CALICO: Self-Supervised Camera-LiDAR Contrastive Pre-training for BEV Perception

Jun 01, 2023
Jiachen Sun, Haizhong Zheng, Qingzhao Zhang, Atul Prakash, Z. Morley Mao, Chaowei Xiao

Coverage-centric Coreset Selection for High Pruning Rates

Oct 28, 2022
Haizhong Zheng, Rui Liu, Fan Lai, Atul Prakash

Understanding and Diagnosing Vulnerability under Adversarial Attacks

Jul 17, 2020
Haizhong Zheng, Ziqi Zhang, Honglak Lee, Atul Prakash

Efficient Adversarial Training with Transferable Adversarial Examples

Dec 27, 2019
Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, Atul Prakash

Robust Classification using Robust Feature Augmentation

May 31, 2019
Kevin Eykholt, Swati Gupta, Atul Prakash, Haizhong Zheng

Analyzing the Interpretability Robustness of Self-Explaining Models

May 27, 2019
Haizhong Zheng, Earlence Fernandes, Atul Prakash
