Imitation Learning


Imitation learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, where each pair indicates the action to take in the state being visited. The demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward (or cost) function under which the demonstrated decisions are optimal.
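To make the BC branch concrete, below is a minimal sketch of behavior cloning as supervised learning. It assumes a discrete action space and demonstrations given as arrays of (state, action) pairs; the network architecture, hyperparameters, and the toy data are illustrative choices, not part of any particular method above.

```python
import torch
import torch.nn as nn

# Behavior cloning sketch: fit a state -> action classifier on
# demonstration pairs, treating each demonstrated action as the
# supervised target label for its state.
def behavior_cloning(states, actions, state_dim, num_actions,
                     epochs=50, lr=1e-3):
    policy = nn.Sequential(
        nn.Linear(state_dim, 64),
        nn.ReLU(),
        nn.Linear(64, num_actions),  # logits over discrete actions
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()  # demonstrated action = class label

    for _ in range(epochs):
        logits = policy(states)
        loss = loss_fn(logits, actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return policy

# Toy usage (hypothetical data): 1000 demonstration pairs,
# 8-dimensional states, 4 possible actions.
states = torch.randn(1000, 8)
actions = torch.randint(0, 4, (1000,))
policy = behavior_cloning(states, actions, state_dim=8, num_actions=4)
greedy_action = policy(states[:1]).argmax(dim=-1)  # act greedily at test time
```

The key design point is that BC reduces imitation to ordinary supervised learning over the demonstration distribution; IRL instead searches over reward functions, which typically requires repeated planning or RL in an inner loop and does not reduce to a single regression like this.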

RFS: Reinforcement learning with Residual flow steering for dexterous manipulation

Feb 03, 2026

Hierarchical Proportion Models for Motion Generation via Integration of Motion Primitives

Feb 03, 2026

On the Sample Efficiency of Inverse Dynamics Models for Semi-Supervised Imitation Learning

Feb 02, 2026

PRISM: Performer RS-IMLE for Single-pass Multisensory Imitation Learning

Feb 02, 2026

Learning-based Initialization of Trajectory Optimization for Path-following Problems of Redundant Manipulators

Feb 03, 2026

Towards Exploratory and Focused Manipulation with Bimanual Active Perception: A New Problem, Benchmark and Strategy

Feb 02, 2026

Didactic to Constructive: Turning Expert Solutions into Learnable Reasoning

Feb 02, 2026

TIC-VLA: A Think-in-Control Vision-Language-Action Model for Robot Navigation in Dynamic Environments

Feb 02, 2026

ForSim: Stepwise Forward Simulation for Traffic Policy Fine-Tuning

Feb 02, 2026

HumanX: Toward Agile and Generalizable Humanoid Interaction Skills from Human Videos

Feb 02, 2026