Markus Kunesch

Doing the right thing for the right reason: Evaluating artificial moral cognition by probing cost insensitivity

May 29, 2023
Yiran Mao, Madeline G. Reinecke, Markus Kunesch, Edgar A. Duéñez-Guzmán, Ramona Comanescu, Julia Haas, Joel Z. Leibo

Beyond Bayes-optimality: meta-learning what you know you don't know

Oct 12, 2022
Jordi Grau-Moya, Grégoire Delétang, Markus Kunesch, Tim Genewein, Elliot Catt, Kevin Li, Anian Ruoss, Chris Cundy, Joel Veness, Jane Wang, Marcus Hutter, Christopher Summerfield, Shane Legg, Pedro Ortega

Your Policy Regularizer is Secretly an Adversary

Apr 01, 2022
Rob Brekelmans, Tim Genewein, Jordi Grau-Moya, Grégoire Delétang, Markus Kunesch, Shane Legg, Pedro Ortega

Model-Free Risk-Sensitive Reinforcement Learning

Nov 04, 2021
Grégoire Delétang, Jordi Grau-Moya, Markus Kunesch, Tim Genewein, Rob Brekelmans, Shane Legg, Pedro A. Ortega

Shaking the foundations: delusions in sequence models for interaction and control

Oct 20, 2021
Pedro A. Ortega, Markus Kunesch, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Joel Veness, Jonas Buchli, Jonas Degrave, Bilal Piot, Julien Perolat, Tom Everitt, Corentin Tallec, Emilio Parisotto, Tom Erez, Yutian Chen, Scott Reed, Marcus Hutter, Nando de Freitas, Shane Legg

Causal Analysis of Agent Behavior for AI Safety

Mar 05, 2021
Grégoire Delétang, Jordi Grau-Moya, Miljan Martic, Tim Genewein, Tom McGrath, Vladimir Mikulik, Markus Kunesch, Shane Legg, Pedro A. Ortega

Human-interpretable model explainability on high-dimensional data

Oct 14, 2020
Damien de Mijolla, Christopher Frye, Markus Kunesch, John Mansir, Ilya Feige
