Enda Howley

Utility-Based Reinforcement Learning: Unifying Single-objective and Multi-objective Reinforcement Learning

Feb 05, 2024
Peter Vamplew, Cameron Foale, Conor F. Hayes, Patrick Mannion, Enda Howley, Richard Dazeley, Scott Johnson, Johan Källström, Gabriel Ramos, Roxana Rădulescu, Willem Röpke, Diederik M. Roijers

ADT: Agent-based Dynamic Thresholding for Anomaly Detection

Dec 03, 2023
Xue Yang, Enda Howley, Micheal Schukat

Distributional Multi-Objective Decision Making

May 19, 2023
Willem Röpke, Conor F. Hayes, Patrick Mannion, Enda Howley, Ann Nowé, Diederik M. Roijers

Monte Carlo Tree Search Algorithms for Risk-Aware and Multi-Objective Reinforcement Learning

Dec 06, 2022
Conor F. Hayes, Mathieu Reymond, Diederik M. Roijers, Enda Howley, Patrick Mannion

Multi-Objective Coordination Graphs for the Expected Scalarised Returns with Generative Flow Models

Jul 01, 2022
Conor F. Hayes, Timothy Verstraeten, Diederik M. Roijers, Enda Howley, Patrick Mannion

Exploring the Pareto front of multi-objective COVID-19 mitigation policies using reinforcement learning

Apr 11, 2022
Mathieu Reymond, Conor F. Hayes, Lander Willem, Roxana Rădulescu, Steven Abrams, Diederik M. Roijers, Enda Howley, Patrick Mannion, Niel Hens, Ann Nowé, Pieter Libin

Expected Scalarised Returns Dominance: A New Solution Concept for Multi-Objective Decision Making

Jun 02, 2021
Conor F. Hayes, Timothy Verstraeten, Diederik M. Roijers, Enda Howley, Patrick Mannion

A Practical Guide to Multi-Objective Reinforcement Learning and Planning

Mar 17, 2021
Conor F. Hayes, Roxana Rădulescu, Eugenio Bargiacchi, Johan Källström, Matthew Macfarlane, Mathieu Reymond, Timothy Verstraeten, Luisa M. Zintgraf, Richard Dazeley, Fredrik Heintz, Enda Howley, Athirai A. Irissappane, Patrick Mannion, Ann Nowé, Gabriel Ramos, Marcello Restelli, Peter Vamplew, Diederik M. Roijers
