Peter Vamplew

Value function interference and greedy action selection in value-based multi-objective reinforcement learning

Feb 09, 2024
Peter Vamplew, Cameron Foale, Richard Dazeley

Utility-Based Reinforcement Learning: Unifying Single-objective and Multi-objective Reinforcement Learning

Feb 05, 2024
Peter Vamplew, Cameron Foale, Conor F. Hayes, Patrick Mannion, Enda Howley, Richard Dazeley, Scott Johnson, Johan Källström, Gabriel Ramos, Roxana Rădulescu, Willem Röpke, Diederik M. Roijers

An Empirical Investigation of Value-Based Multi-objective Reinforcement Learning for Stochastic Environments

Jan 06, 2024
Kewen Ding, Peter Vamplew, Cameron Foale, Richard Dazeley

Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety

May 30, 2023
Catalin Mitelut, Ben Smith, Peter Vamplew

Broad-persistent Advice for Interactive Reinforcement Learning Scenarios

Oct 11, 2022
Francisco Cruz, Adam Bignold, Hung Son Nguyen, Richard Dazeley, Peter Vamplew

Elastic Step DQN: A novel multi-step algorithm to alleviate overestimation in Deep QNetworks

Oct 07, 2022
Adrian Ly, Richard Dazeley, Peter Vamplew, Francisco Cruz, Sunil Aryal

Evaluating Human-like Explanations for Robot Actions in Reinforcement Learning Scenarios

Jul 07, 2022
Francisco Cruz, Charlotte Young, Richard Dazeley, Peter Vamplew

Explainable Reinforcement Learning for Broad-XAI: A Conceptual Framework and Survey

Aug 20, 2021
Richard Dazeley, Peter Vamplew, Francisco Cruz

Levels of explainable artificial intelligence for human-aligned conversational explanations

Jul 07, 2021
Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal, Francisco Cruz

A Practical Guide to Multi-Objective Reinforcement Learning and Planning

Mar 17, 2021
Conor F. Hayes, Roxana Rădulescu, Eugenio Bargiacchi, Johan Källström, Matthew Macfarlane, Mathieu Reymond, Timothy Verstraeten, Luisa M. Zintgraf, Richard Dazeley, Fredrik Heintz, Enda Howley, Athirai A. Irissappane, Patrick Mannion, Ann Nowé, Gabriel Ramos, Marcello Restelli, Peter Vamplew, Diederik M. Roijers
