
Filippo Lazzati

Reward Compatibility: A Framework for Inverse RL

Jan 14, 2025

On the Partial Identifiability in Reward Learning: Choosing the Best Reward

Jan 10, 2025

Learning Utilities from Demonstrations in Markov Decision Processes

Sep 25, 2024

How to Scale Inverse RL to Large State Spaces? A Provably Efficient Approach

Jun 06, 2024

Offline Inverse RL: New Solution Concepts and Provably Efficient Algorithms

Feb 23, 2024

Towards Theoretical Understanding of Inverse Reinforcement Learning

Apr 25, 2023