Imitation Learning


Imitation learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take in the state being visited. The demonstrated actions are typically exploited in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to recover a reward (or cost) function under which those decisions are optimal.
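The BC idea above can be sketched as a tiny supervised learner. The example below is a hypothetical toy (the nearest-neighbor policy, state format, and action labels are illustrative assumptions, not from any of the papers listed): it treats each demonstrated action as the label for its state and predicts by copying the action of the closest demonstrated state.

```python
# Minimal behavior-cloning sketch (toy example, not a real robot policy):
# demonstrations are (state, action) pairs, and the learned "policy" is a
# 1-nearest-neighbor mapping over the demonstrated states.

def bc_policy(demos):
    """Return a policy imitating the action of the nearest demonstrated state."""
    def policy(state):
        # Supervised mapping: pick the action whose demo state is closest
        # to the query state (squared Euclidean distance).
        _nearest_state, action = min(
            demos,
            key=lambda sa: sum((s - x) ** 2 for s, x in zip(sa[0], state)),
        )
        return action
    return policy

# Toy demonstrations: state is (position, velocity), action is a label.
demos = [
    ((0.0, 0.0), "accelerate"),
    ((1.0, 0.5), "coast"),
    ((2.0, 0.0), "brake"),
]
policy = bc_policy(demos)
print(policy((0.9, 0.4)))  # nearest demo state is (1.0, 0.5) -> "coast"
```

In practice the lookup table would be replaced by a trained function approximator (e.g. a neural network fit with a supervised loss), but the structure is the same: states in, demonstrated actions out.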

StageACT: Stage-Conditioned Imitation for Robust Humanoid Door Opening (Sep 16, 2025)

Robust Online Residual Refinement via Koopman-Guided Dynamics Modeling (Sep 16, 2025)

Towards Context-Aware Human-like Pointing Gestures with RL Motion Imitation (Sep 16, 2025)

ActiveVLN: Towards Active Exploration via Multi-Turn RL in Vision-and-Language Navigation (Sep 16, 2025)

Learning to Generate Pointing Gestures in Situated Embodied Conversational Agents (Sep 15, 2025)

Large Language Models Imitate Logical Reasoning, but at what Cost? (Sep 16, 2025)

JANUS: A Dual-Constraint Generative Framework for Stealthy Node Injection Attacks (Sep 16, 2025)

Input-gated Bilateral Teleoperation: An Easy-to-implement Force Feedback Teleoperation Method for Low-cost Hardware (Sep 10, 2025)

PegasusFlow: Parallel Rolling-Denoising Score Sampling for Robot Diffusion Planner Flow Matching (Sep 10, 2025)

Grasp Like Humans: Learning Generalizable Multi-Fingered Grasping from Human Proprioceptive Sensorimotor Integration (Sep 10, 2025)