Stanislav Fort

Scaling Laws for Adversarial Attacks on Language Model Activations

Dec 05, 2023
Stanislav Fort

Multi-attacks: Many images $+$ the same adversarial attack $\to$ many target labels

Aug 04, 2023
Stanislav Fort

Constitutional AI: Harmlessness from AI Feedback

Dec 15, 2022
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamilė Lukošiūtė, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, Jared Kaplan

Measuring Progress on Scalable Oversight for Large Language Models

Nov 11, 2022
Samuel R. Bowman, Jeeyoon Hyun, Ethan Perez, Edwin Chen, Craig Pettit, Scott Heiner, Kamilė Lukošiūtė, Amanda Askell, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Christopher Olah, Daniela Amodei, Dario Amodei, Dawn Drain, Dustin Li, Eli Tran-Johnson, Jackson Kernion, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Liane Lovitt, Nelson Elhage, Nicholas Schiefer, Nicholas Joseph, Noemí Mercado, Nova DasSarma, Robin Larson, Sam McCandlish, Sandipan Kundu, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Timothy Telleen-Lawton, Tom Brown, Tom Henighan, Tristan Hume, Yuntao Bai, Zac Hatfield-Dodds, Ben Mann, Jared Kaplan

What does a deep neural network confidently perceive? The effective dimension of high certainty class manifolds and their low confidence boundaries

Oct 11, 2022
Stanislav Fort, Ekin Dogus Cubuk, Surya Ganguli, Samuel S. Schoenholz

Language Models (Mostly) Know What They Know

Jul 16, 2022
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, Jared Kaplan

Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback

Apr 12, 2022
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, Jared Kaplan

Adversarial vulnerability of powerful near out-of-distribution detection

Jan 18, 2022
Stanislav Fort

How many degrees of freedom do we need to train deep networks: a loss landscape perspective

Jul 13, 2021
Brett W. Larsen, Stanislav Fort, Nic Becker, Surya Ganguli
