
Ziquan Liu


Borrowing Treasures from Neighbors: In-Context Learning for Multimodal Learning with Missing Modalities and Data Scarcity

Mar 26, 2024
Zhuo Zhi, Ziquan Liu, Moe Elbadawi, Adam Daneshmend, Mine Orlu, Abdul Basit, Andreas Demosthenous, Miguel Rodrigues


PROSAC: Provably Safe Certification for Machine Learning Models under Adversarial Attacks

Feb 04, 2024
Ziquan Liu, Zhuo Zhi, Ilija Bogunovic, Carsten Gerner-Beuerle, Miguel Rodrigues


DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks

Apr 07, 2023
Qiangqiang Wu, Tianyu Yang, Ziquan Liu, Baoyuan Wu, Ying Shan, Antoni B. Chan


TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization

Mar 20, 2023
Ziquan Liu, Yi Xu, Xiangyang Ji, Antoni B. Chan


Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization

Oct 11, 2022
Ziquan Liu, Antoni B. Chan


An Empirical Study on Distribution Shift Robustness From the Perspective of Pre-Training and Data Augmentation

May 25, 2022
Ziquan Liu, Yi Xu, Yuanhong Xu, Qi Qian, Hao Li, Rong Jin, Xiangyang Ji, Antoni B. Chan


Improved Fine-tuning by Leveraging Pre-training Data: Theory and Practice

Nov 24, 2021
Ziquan Liu, Yi Xu, Yuanhong Xu, Qi Qian, Hao Li, Antoni Chan, Rong Jin


The Implicit Biases of Stochastic Gradient Descent on Deep Neural Networks with Batch Normalization

Feb 06, 2021
Ziquan Liu, Yufei Cui, Jia Wan, Yu Mao, Antoni B. Chan
