Pin-Yu Chen

Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective

Nov 28, 2023
Ming-Yu Chung, Sheng-Yen Chou, Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo, Tsung-Yi Ho

Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift

Nov 27, 2023
Shengwei An, Sheng-Yen Chou, Kaiyuan Zhang, Qiuling Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng, Shiqing Ma, Pin-Yu Chen, Tsung-Yi Ho, Xiangyu Zhang

Conditional Modeling Based Automatic Video Summarization

Nov 20, 2023
Jia-Hong Huang, Chao-Han Huck Yang, Pin-Yu Chen, Min-Hung Chen, Marcel Worring

On the Convergence and Sample Complexity Analysis of Deep Q-Networks with ε-Greedy Exploration

Oct 24, 2023
Shuai Zhang, Hongkang Li, Meng Wang, Miao Liu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Keerthiram Murugesan, Subhajit Chaudhury

HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models

Oct 16, 2023
Chen Chen, Yuchen Hu, Chao-Han Huck Yang, Sabato Marco Siniscalchi, Pin-Yu Chen, Eng Siong Chng

Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?

Oct 16, 2023
Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, Chun-Ying Huang

AutoVP: An Automated Visual Prompting Framework and Benchmark

Oct 12, 2023
Hsi-Ai Tsao, Lei Hsiung, Pin-Yu Chen, Sijia Liu, Tsung-Yi Ho

Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!

Oct 05, 2023
Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson

Time-LLM: Time Series Forecasting by Reprogramming Large Language Models

Oct 03, 2023
Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, Qingsong Wen
