Pin-Yu Chen

Better May Not Be Fairer: Can Data Augmentation Mitigate Subgroup Degradation?

Dec 16, 2022
Ming-Chang Chiu, Pin-Yu Chen, Xuezhe Ma

How to Backdoor Diffusion Models?

Dec 11, 2022
Sheng-Yen Chou, Pin-Yu Chen, Tsung-Yi Ho

When Neural Networks Fail to Generalize? A Model Sensitivity Perspective

Dec 01, 2022
Jiajin Zhang, Hanqing Chao, Amit Dhurandhar, Pin-Yu Chen, Ali Tajer, Yangyang Xu, Pingkun Yan

NCTV: Neural Clamping Toolkit and Visualization for Neural Network Calibration

Nov 29, 2022
Lei Hsiung, Yung-Chen Tang, Pin-Yu Chen, Tsung-Yi Ho

Understanding and Improving Visual Prompting: A Label-Mapping Perspective

Nov 21, 2022
Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zhang, Sijia Liu

Low-Resource Music Genre Classification with Advanced Neural Model Reprogramming

Nov 02, 2022
Yun-Ning Hung, Chao-Han Huck Yang, Pin-Yu Chen, Alexander Lerch

Inference and Denoise: Causal Inference-based Neural Speech Enhancement

Nov 02, 2022
Tsun-An Hsieh, Chao-Han Huck Yang, Pin-Yu Chen, Sabato Marco Siniscalchi, Yu Tsao

Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise

Nov 02, 2022
Jhih-Cing Huang, Yu-Lin Tsai, Chao-Han Huck Yang, Cheng-Fang Su, Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo

An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization

Oct 27, 2022
Elvin Lo, Pin-Yu Chen

FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning

Oct 23, 2022
Kaiyuan Zhang, Guanhong Tao, Qiuling Xu, Siyuan Cheng, Shengwei An, Yingqi Liu, Shiwei Feng, Guangyu Shen, Pin-Yu Chen, Shiqing Ma, Xiangyu Zhang
