
Z. Berkay Celik

Purdue University

Rethinking How to Evaluate Language Model Jailbreak

Apr 12, 2024
Hongyu Cai, Arjun Arunasalam, Leo Y. Lin, Antonio Bianchi, Z. Berkay Celik

Take a Look at it! Rethinking How to Evaluate Language Model Jailbreak

Apr 09, 2024
Hongyu Cai, Arjun Arunasalam, Leo Y. Lin, Antonio Bianchi, Z. Berkay Celik

Software Engineering for Robotics: Future Research Directions; Report from the 2023 Workshop on Software Engineering for Robotics

Jan 22, 2024
Claire Le Goues, Sebastian Elbaum, David Anthony, Z. Berkay Celik, Mauricio Castillo-Effen, Nikolaus Correll, Pooyan Jamshidi, Morgan Quigley, Trenton Tabor, Qi Zhu

Can Large Language Models Provide Security & Privacy Advice? Measuring the Ability of LLMs to Refute Misconceptions

Oct 03, 2023
Yufan Chen, Arjun Arunasalam, Z. Berkay Celik

New Metrics to Evaluate the Performance and Fairness of Personalized Federated Learning

Jul 28, 2021
Siddharth Divi, Yi-Shan Lin, Habiba Farrukh, Z. Berkay Celik

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors

Sep 22, 2020
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik

Real-time Analysis of Privacy-(un)aware IoT Applications

Nov 24, 2019
Leonardo Babun, Z. Berkay Celik, Patrick McDaniel, A. Selcuk Uluagac

Detection under Privileged Information

Mar 31, 2018
Z. Berkay Celik, Patrick McDaniel, Rauf Izmailov, Nicolas Papernot, Ryan Sheatsley, Raquel Alvarez, Ananthram Swami

Patient-Driven Privacy Control through Generalized Distillation

Oct 13, 2017
Z. Berkay Celik, David Lopez-Paz, Patrick McDaniel

Practical Black-Box Attacks against Machine Learning

Mar 19, 2017
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, Ananthram Swami
