Imitation Learning

Imitation learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take at the visited state. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats the action as the target label for each state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
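To make the behavior-cloning view concrete, below is a minimal sketch in Python/PyTorch. It assumes continuous states and actions and uses placeholder demonstration data; the dimensions, network architecture, and variable names are illustrative assumptions, not taken from any specific paper listed here.

```python
# Minimal behavior-cloning sketch (illustrative; shapes and names are assumptions).
# Demonstrations are (state, action) pairs; BC fits a policy network that maps
# states to actions with a standard supervised loss.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2                   # assumed dimensions for illustration
demo_states = torch.randn(1024, state_dim)     # placeholder demonstration states
demo_actions = torch.randn(1024, action_dim)   # placeholder demonstrated actions

policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, action_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(100):
    pred_actions = policy(demo_states)
    # Supervised regression onto the demonstrated actions (MSE for continuous
    # actions; a cross-entropy loss would be used for discrete action spaces).
    loss = nn.functional.mse_loss(pred_actions, demo_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# IRL, by contrast, would treat the demonstrations as (near-)optimal decisions and
# search for a reward function that explains them, rather than regressing actions.
```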

Imitation Learning in the Deep Learning Era: A Novel Taxonomy and Recent Advances

Nov 05, 2025

Going Beyond Expert Performance via Deep Implicit Imitation Reinforcement Learning

Nov 05, 2025

Unified Multimodal Diffusion Forcing for Forceful Manipulation

Nov 06, 2025

GraSP-VLA: Graph-based Symbolic Action Representation for Long-Horizon Planning with VLA Policies

Nov 06, 2025

Isaac Lab: A GPU-Accelerated Simulation Framework for Multi-Modal Robot Learning

Nov 06, 2025

When AI Trading Agents Compete: Adverse Selection of Meta-Orders by Reinforcement Learning-Based Market Making

Oct 31, 2025

Hybrid Consistency Policy: Decoupling Multi-Modal Diversity and Real-Time Efficiency in Robotic Manipulation

Oct 30, 2025

Learning to Manage Investment Portfolios beyond Simple Utility Functions

Oct 30, 2025

Beyond Imitation: Constraint-Aware Trajectory Generation with Flow Matching For End-to-End Autonomous Driving

Oct 30, 2025

Human-in-the-loop Online Rejection Sampling for Robotic Manipulation

Oct 30, 2025