Markus Kunesch

Doing the right thing for the right reason: Evaluating artificial moral cognition by probing cost insensitivity
May 29, 2023
Yiran Mao, Madeline G. Reinecke, Markus Kunesch, Edgar A. Duéñez-Guzmán, Ramona Comanescu, Julia Haas, Joel Z. Leibo

Beyond Bayes-optimality: meta-learning what you know you don't know
Oct 12, 2022
Jordi Grau-Moya, Grégoire Delétang, Markus Kunesch, Tim Genewein, Elliot Catt, Kevin Li, Anian Ruoss, Chris Cundy, Joel Veness, Jane Wang, Marcus Hutter, Christopher Summerfield, Shane Legg, Pedro Ortega

Your Policy Regularizer is Secretly an Adversary
Apr 01, 2022
Rob Brekelmans, Tim Genewein, Jordi Grau-Moya, Grégoire Delétang, Markus Kunesch, Shane Legg, Pedro Ortega

Model-Free Risk-Sensitive Reinforcement Learning
Nov 04, 2021
Grégoire Delétang, Jordi Grau-Moya, Markus Kunesch, Tim Genewein, Rob Brekelmans, Shane Legg, Pedro A. Ortega

Shaking the foundations: delusions in sequence models for interaction and control
Oct 20, 2021
Pedro A. Ortega, Markus Kunesch, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Joel Veness, Jonas Buchli, Jonas Degrave, Bilal Piot, Julien Perolat, Tom Everitt, Corentin Tallec, Emilio Parisotto, Tom Erez, Yutian Chen, Scott Reed, Marcus Hutter, Nando de Freitas, Shane Legg

Causal Analysis of Agent Behavior for AI Safety
Mar 05, 2021
Grégoire Delétang, Jordi Grau-Moya, Miljan Martic, Tim Genewein, Tom McGrath, Vladimir Mikulik, Markus Kunesch, Shane Legg, Pedro A. Ortega

Human-interpretable model explainability on high-dimensional data
Oct 14, 2020
Damien de Mijolla, Christopher Frye, Markus Kunesch, John Mansir, Ilya Feige
