Imitation Learning


Imitation learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take in the state being visited. The demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized state-to-action mapping in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward (or cost) function under which those decisions are optimal.
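To make the Behavior Cloning idea concrete, here is a minimal sketch: the demonstration data, the "expert" policy, and the linear hypothesis class are all illustrative assumptions (real BC typically fits a neural network to recorded trajectories), but the supervised structure is the same, with states as inputs and demonstrated actions as regression targets.

```python
import numpy as np

# Hypothetical expert demonstrations: for illustration, the "expert"
# follows a linear policy a = W_true @ s (W_true is an assumed name).
rng = np.random.default_rng(0)
W_true = np.array([[1.0, -0.5],
                   [0.3, 2.0]])
states = rng.normal(size=(200, 2))   # demonstrated states
actions = states @ W_true.T          # demonstrated expert actions

# Behavior Cloning: treat each demonstrated action as the supervised
# target for its state and fit a state -> action mapping, here by
# ordinary least squares over a linear policy class.
X, *_ = np.linalg.lstsq(states, actions, rcond=None)
W_bc = X.T

def policy(s):
    """Cloned policy: maps a new state to an imitated action."""
    return W_bc @ s
```

With noise-free linear demonstrations the recovered weights match the expert's; in practice BC inherits the usual caveats of supervised learning, notably compounding errors when the learned policy drifts to states absent from the demonstrations.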

Value from Observations: Towards Large-Scale Imitation Learning via Self-Improvement

Jul 09, 2025

Learning safe, constrained policies via imitation learning: Connection to Probabilistic Inference and a Naive Algorithm

Jul 09, 2025

Spatial-Temporal Aware Visuomotor Diffusion Policy Learning

Jul 09, 2025

Fast Bilateral Teleoperation and Imitation Learning Using Sensorless Force Control via Accurate Dynamics Model

Jul 08, 2025

Learning to Evaluate Autonomous Behaviour in Human-Robot Interaction

Jul 08, 2025

EC-Flow: Enabling Versatile Robotic Manipulation from Action-Unlabeled Videos via Embodiment-Centric Flow

Jul 08, 2025

TriVLA: A Triple-System-Based Unified Vision-Language-Action Model for General Robot Control

Jul 03, 2025

Imitation Learning for Satellite Attitude Control under Unknown Perturbations

Jul 01, 2025

Towards Bio-Inspired Robotic Trajectory Planning via Self-Supervised RNN

Jul 02, 2025

TypeTele: Releasing Dexterity in Teleoperation by Dexterous Manipulation Types

Jul 02, 2025