Xiangning Chen

Why Does Sharpness-Aware Minimization Generalize Better Than SGD?
Oct 11, 2023
Zixiang Chen, Junkai Zhang, Yiwen Kou, Xiangning Chen, Cho-Jui Hsieh, Quanquan Gu

Red Teaming Language Model Detectors with Language Models
May 31, 2023
Zhouxing Shi, Yihan Wang, Fan Yin, Xiangning Chen, Kai-Wei Chang, Cho-Jui Hsieh

Symbol tuning improves in-context learning in language models
May 15, 2023
Jerry Wei, Le Hou, Andrew Lampinen, Xiangning Chen, Da Huang, Yi Tay, Xinyun Chen, Yifeng Lu, Denny Zhou, Tengyu Ma, Quoc V. Le

Symbolic Discovery of Optimization Algorithms
Feb 17, 2023
Xiangning Chen, Chen Liang, Da Huang, Esteban Real, Kaiyuan Wang, Yao Liu, Hieu Pham, Xuanyi Dong, Thang Luong, Cho-Jui Hsieh, Yifeng Lu, Quoc V. Le

Towards Efficient and Scalable Sharpness-Aware Minimization
Mar 05, 2022
Yong Liu, Siqi Mai, Xiangning Chen, Cho-Jui Hsieh, Yang You

Can Vision Transformers Perform Convolution?
Nov 03, 2021
Shanda Li, Xiangning Chen, Di He, Cho-Jui Hsieh

RANK-NOSH: Efficient Predictor-Based Architecture Search via Non-Uniform Successive Halving
Aug 18, 2021
Ruochen Wang, Xiangning Chen, Minhao Cheng, Xiaocheng Tang, Cho-Jui Hsieh

Rethinking Architecture Selection in Differentiable NAS
Aug 10, 2021
Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh

When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations
Jun 03, 2021
Xiangning Chen, Cho-Jui Hsieh, Boqing Gong
