Abstract: Humanoid robots operating in human-centered environments (e.g., homes, hospitals, and offices) must mitigate foot-ground impact transients, as impact-induced vibration and noise degrade user experience and repeated impacts accelerate hardware wear. However, existing low-noise locomotion training often relies on kinematic proxy objectives or fragile force sensors, and footwear-induced changes in contact dynamics introduce distribution shifts that hinder policy generalization. We present QuietWalk, a physics-informed reinforcement learning framework for ground-reaction-force-aware humanoid locomotion under diverse footwear conditions. QuietWalk employs an inverse-dynamics-constrained physics-informed neural network (PINN) to estimate per-foot vertical ground reaction forces (GRFs) from proprioceptive signals, and integrates the frozen predictor into the RL training loop to penalize predicted impact forces without requiring force sensors at deployment. On a held-out real-robot dataset, enforcing inverse-dynamics consistency reduces vertical GRF prediction errors by 82%-86% compared with a purely supervised predictor and improves the coefficient of determination from 0.39/0.67 to 0.99/0.99 for the left/right feet. On hardware at 1.2 m/s (barefoot; averaged over four floor materials), QuietWalk reduces the mean A-weighted noise level by 7.17 dB and the peak noise level by 4.98 dB under a consistent recording setup. Cross-footwear experiments (barefoot, skate shoes, athletic sneakers, and high heels) across multiple surfaces further demonstrate robust adaptation to footwear-induced contact variations.
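The abstract does not spell out the inverse-dynamics constraint, but one standard way to build such a PINN loss is to add a Newton-Euler vertical-balance residual (the vertical GRFs across both feet should sum to the robot mass times gravitational plus center-of-mass acceleration) to the supervised GRF error. The sketch below is a minimal, hypothetical illustration of that idea; the function name, weighting `lam`, and signal choices are assumptions, not QuietWalk's actual loss.

```python
import numpy as np

def quietwalk_loss(f_left, f_right, f_left_meas, f_right_meas,
                   mass, com_accel_z, g=9.81, lam=0.5):
    """Hypothetical PINN training loss: supervised per-foot GRF error
    plus a Newton-Euler vertical-balance residual, which requires
    f_left + f_right ~= mass * (g + z_com_ddot) during stance."""
    supervised = (np.mean((f_left - f_left_meas) ** 2)
                  + np.mean((f_right - f_right_meas) ** 2))
    # Physics residual: total vertical GRF vs. vertical momentum balance.
    physics = np.mean((f_left + f_right - mass * (g + com_accel_z)) ** 2)
    return supervised + lam * physics
```

At deployment only the trained predictor is needed, which matches the abstract's point that no force sensors are required on the robot.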
Abstract: It remains challenging to achieve human-like locomotion in legged robots due to fundamental discrepancies between biological and mechanical structures. Although imitation learning has emerged as a promising approach for generating natural robotic movements, simply replicating joint angle trajectories fails to capture the underlying principles of human motion. This study proposes the Gait Divergence Analysis Framework (GDAF), a unified biomechanical evaluation framework that systematically quantifies kinematic and kinetic discrepancies between humans and bipedal robots. We apply GDAF to systematically compare human and humanoid locomotion across 28 walking speeds. To enable reproducible analysis, we collect and release a speed-continuous humanoid locomotion dataset from a state-of-the-art humanoid controller. We further provide an open-source implementation of GDAF, including analysis, visualization, and MuJoCo-based tools, enabling quantitative, interpretable, and reproducible biomechanical analysis of humanoid locomotion. Results demonstrate that despite visually human-like motion generated by modern humanoid controllers, significant biomechanical divergence persists across speeds. Robots exhibit systematic deviations in gait symmetry, energy distribution, and joint coordination, indicating that substantial room remains for improving the biomechanical fidelity and energetic efficiency of humanoid locomotion. This work provides a quantitative benchmark for evaluating humanoid locomotion and offers data and versatile tools to support the development of more human-like and energetically efficient locomotion controllers. The data and code will be made publicly available upon acceptance of the paper.
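The abstract mentions deviations in gait symmetry without defining the metric. A common biomechanical choice is the Robinson symmetry index, where 0% means perfect left/right symmetry; GDAF's exact divergence measures are not specified here, so the following is an illustrative stand-in rather than the framework's actual implementation.

```python
import numpy as np

def symmetry_index(x_left, x_right):
    """Robinson symmetry index (%) for a left/right gait variable
    (e.g., step length or peak joint torque). 0 indicates perfect
    symmetry; larger values indicate greater asymmetry."""
    xl, xr = np.mean(x_left), np.mean(x_right)
    return 100.0 * abs(xl - xr) / (0.5 * (xl + xr))
```

Comparing such per-speed indices between human reference data and robot rollouts is one way a framework like GDAF could expose systematic asymmetries across the 28 walking speeds.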




Abstract: Loco-manipulation for humanoid robots aims to enable robots to integrate mobility with upper-body tracking capabilities. Most existing approaches adopt hierarchical architectures that decompose control into isolated upper-body (manipulation) and lower-body (locomotion) policies. While this decomposition reduces training complexity, it inherently limits coordination between subsystems and contradicts the unified whole-body control exhibited by humans. We demonstrate that a single unified policy can achieve a combination of tracking accuracy, large workspace, and robustness for humanoid loco-manipulation. We propose the Unified Loco-Manipulation Controller (ULC), a single-policy framework that simultaneously tracks root velocity, root height, torso rotation, and dual-arm joint positions in an end-to-end manner, proving the feasibility of unified control without sacrificing performance. We achieve this unified control through key technologies: sequential skill acquisition for progressive learning complexity, residual action modeling for fine-grained control adjustments, command polynomial interpolation for smooth motion transitions, random delay release for robustness to deployment variations, load randomization for generalization to external disturbances, and center-of-gravity tracking for providing explicit policy gradients to maintain stability. We validate our method on the Unitree G1 humanoid robot with a 3-DOF (degrees-of-freedom) waist. Compared with strong baselines, ULC shows better tracking performance than disentangled methods and demonstrates larger workspace coverage. The unified dual-arm tracking enables precise manipulation under external loads while maintaining coordinated whole-body control for complex loco-manipulation tasks.
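The abstract names "command polynomial interpolation" for smooth motion transitions but not its exact form. A common choice for blending between command setpoints with zero velocity and acceleration at the endpoints is a quintic (minimum-jerk) polynomial; the sketch below illustrates that under this assumption, with a hypothetical function name.

```python
def interp_command(c0, c1, t, T):
    """Quintic (minimum-jerk) interpolation from command c0 to c1 over
    duration T. The blend 10s^3 - 15s^4 + 6s^5 has zero first and
    second derivatives at s = 0 and s = 1, avoiding step changes in
    commanded velocity or acceleration."""
    s = min(max(t / T, 0.0), 1.0)  # clamp normalized time to [0, 1]
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5
    return c0 + (c1 - c0) * blend
```

Feeding the policy interpolated commands instead of raw setpoint jumps is one plausible way such a scheme would smooth transitions between, e.g., root velocity or arm-joint targets.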