Abstract: Animals achieve energy-efficient locomotion through their implicit passive dynamics, a marvel that has captivated roboticists for decades. Recently, methods combining Adversarial Motion Priors (AMP) with reinforcement learning (RL) have shown promising progress in replicating animals' naturalistic motion. However, such imitation learning approaches predominantly capture explicit kinematic patterns, the so-called gaits, while overlooking the implicit passive dynamics. This work bridges this gap by incorporating a reward term guided by the Impact Mitigation Factor (IMF), a physics-informed metric that quantifies a robot's ability to passively mitigate impacts. By integrating IMF with AMP, our approach enables RL policies to learn both explicit motion trajectories from animal reference motions and the implicit passive dynamics. We demonstrate energy efficiency improvements of up to 32%, as measured by the Cost of Transport (CoT), across both AMP-based and handcrafted reward structures.
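For context, the Cost of Transport used here as the efficiency measure is conventionally the dimensionless ratio of power expended to weight times speed; the abstract does not spell out a definition, so the following is the standard form rather than the paper's own notation:

```latex
% Standard dimensionless Cost of Transport: average power draw P,
% robot mass m, gravitational acceleration g, mean forward speed v.
% Lower CoT means more energy-efficient locomotion.
\mathrm{CoT} = \frac{P}{m\,g\,v}
```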
Abstract: Prompted by a recent experiment by Victor Haghani and Richard Dewey, this note generalises the Kelly strategy (optimal for simple investment games with log utility) to a large class of practical utility functions, including the effect of extraneous wealth. A counterintuitive result is proved: for any continuous, concave, differentiable utility function, the optimal choice at every point depends only on the probability of reaching that point. The practical calculation of the optimal action at every stage is made possible through use of the binomial expansion, reducing the problem size from exponential to quadratic. Applications include (better) automatic investing and risk-taking under uncertainty.
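As an illustration of the binomial-expansion idea, the sketch below evaluates expected final utility in a Haghani–Dewey-style coin game (assumed here: win probability p = 0.6, even-money bets, a fixed number of rounds) for a constant betting fraction, collapsing the 2^N outcome paths into N + 1 binomial terms. This is a minimal sketch under those assumptions, not the note's algorithm, whose state-dependent fractions give the quadratic-size calculation:

```python
from math import comb, log

def expected_utility(f, p=0.6, n_rounds=10, w0=1.0, utility=log):
    """Expected utility of final wealth when betting a constant
    fraction f of current wealth on each of n_rounds biased coin
    flips. The binomial expansion collapses the 2**n_rounds outcome
    paths into n_rounds + 1 distinct final-wealth values."""
    total = 0.0
    for k in range(n_rounds + 1):  # k = number of winning flips
        wealth = w0 * (1 + f) ** k * (1 - f) ** (n_rounds - k)
        prob = comb(n_rounds, k) * p**k * (1 - p) ** (n_rounds - k)
        total += prob * utility(wealth)
    return total

# Grid search over constant betting fractions (f = 1 excluded to
# avoid log(0) on an all-losses path).
best = max((f / 100 for f in range(100)), key=expected_utility)
print(f"best constant fraction ~ {best:.2f}")
```

With log utility the grid search recovers the Kelly fraction 2p − 1 = 0.2, the baseline strategy that the note generalises to other utility functions.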