Imitation Learning


Imitation learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take at the state being visited. The demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
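As a concrete illustration of the BC idea, the sketch below treats demonstrations as labeled (state, action) pairs and fits a simple supervised predictor. This is a minimal toy example, not any particular paper's method: the 2-D states, the "left"/"right" action labels, and the choice of a 1-nearest-neighbor learner are all illustrative assumptions.

```python
import math

# Hypothetical toy demonstrations: each pair is (state, action),
# with 2-D state features and discrete action labels.
demos = [((0.0, 1.0), "left"), ((0.1, 0.9), "left"),
         ((1.0, 0.0), "right"), ((0.9, 0.2), "right")]

def bc_policy(state):
    """Behavior Cloning as supervised learning, here via 1-nearest
    neighbor: copy the demonstrated action at the closest state."""
    _, action = min(demos, key=lambda pair: math.dist(state, pair[0]))
    return action

print(bc_policy((0.05, 0.95)))  # near the first cluster  -> left
print(bc_policy((0.95, 0.1)))   # near the second cluster -> right
```

In practice the nearest-neighbor learner would be replaced by any supervised model (e.g. a neural network trained with cross-entropy on the demonstrated actions), but the structure is the same: states are inputs, demonstrated actions are labels.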

Latent Wasserstein Adversarial Imitation Learning

Mar 05, 2026

Task-Relevant and Irrelevant Region-Aware Augmentation for Generalizable Vision-Based Imitation Learning in Agricultural Manipulation

Mar 05, 2026

Data-Driven Control of a Magnetically Actuated Fish-Like Robot

Mar 05, 2026

SeedPolicy: Horizon Scaling via Self-Evolving Diffusion Policy for Robot Manipulation

Mar 05, 2026

RoboPocket: Improve Robot Policies Instantly with Your Phone

Mar 05, 2026

VPWEM: Non-Markovian Visuomotor Policy with Working and Episodic Memory

Mar 05, 2026

Force-Aware Residual DAgger via Trajectory Editing for Precision Insertion with Impedance Control

Mar 04, 2026

IROSA: Interactive Robot Skill Adaptation using Natural Language

Mar 04, 2026

Latent Particle World Models: Self-supervised Object-centric Stochastic Dynamics Modeling

Mar 04, 2026

ELLIPSE: Evidential Learning for Robust Waypoints and Uncertainties

Mar 04, 2026