Daniel Paleka

Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition

Jun 12, 2024

Foundational Challenges in Assuring Alignment and Safety of Large Language Models

Apr 15, 2024

ARB: Advanced Reasoning Benchmark for Large Language Models

Jul 28, 2023

Evaluating Superhuman Models with Consistency Checks

Jun 19, 2023

Poisoning Web-Scale Training Datasets is Practical

Feb 20, 2023

Red-Teaming the Stable Diffusion Safety Filter

Oct 11, 2022

A law of adversarial risk, interpolation, and label noise

Jul 08, 2022