This paper presents a closed-form approach to constraining a flow within a given volume and around obstacles. The flow is guaranteed to converge to and stop at a single fixed point. We show that the obstacle avoidance problem can be inverted to enforce that the flow remains enclosed within a volume defined by a polygonal surface. We formally guarantee that such a flow will never contact the boundaries of the enclosing volume or the obstacles, and will asymptotically converge towards an attractor. We further create smooth motion fields around obstacles with edges (e.g., tables). The technique enables a robot to navigate within an enclosed corridor while avoiding static and moving obstacles. It is demonstrated on an autonomous robot (QOLO) in a complex static indoor environment and also tested in simulations with dense crowds.
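The closed-form obstacle avoidance underlying this line of work modulates a nominal converging flow by a matrix built from the obstacle's normal and tangent directions. The sketch below illustrates the idea for a single circular obstacle in 2D; the paper's contribution (polygonal surfaces, enclosing volumes, the inverted problem) is not reproduced, and the function name and gain structure here are illustrative assumptions.

```python
import numpy as np

def modulated_velocity(x, attractor, obs_center, obs_radius):
    """Steer the linear flow xdot = attractor - x around a circular
    obstacle using the closed-form modulation M(x) = E(x) D(x) E(x)^-1.
    Illustrative 2D sketch only: the paper handles polygonal surfaces
    and flows enclosed within volumes, which are not reproduced here."""
    f = attractor - x                      # nominal converging flow
    r = x - obs_center
    gamma = np.dot(r, r) / obs_radius**2   # Gamma > 1 outside the obstacle
    n = r / np.linalg.norm(r)              # normal (radial) direction
    t = np.array([-n[1], n[0]])            # tangent direction
    E = np.column_stack([n, t])
    D = np.diag([1.0 - 1.0 / gamma,        # cancel the normal component...
                 1.0 + 1.0 / gamma])       # ...amplify the tangential one
    return E @ D @ np.linalg.inv(E) @ f

# A point approaching a unit obstacle at the origin is deflected around it:
x = np.array([-2.0, 0.1])
attractor = np.array([3.0, 0.0])
v = modulated_velocity(x, attractor, np.zeros(2), 1.0)
```

On the obstacle boundary (Gamma = 1) the normal eigenvalue vanishes, so the flow cannot penetrate the surface, which is the mechanism behind the non-contact guarantee.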
We study the problem of aligning two sets of 3D geometric primitives given known correspondences. Our first contribution is to show that this primitive alignment framework unifies five perception problems including point cloud registration, primitive (mesh) registration, category-level 3D registration, absolute pose estimation (APE), and category-level APE. Our second contribution is to propose DynAMical Pose estimation (DAMP), the first general and practical algorithm to solve the primitive alignment problem by simulating rigid body dynamics arising from virtual springs and damping, where the springs span the shortest distances between corresponding primitives. Our third contribution is to apply DAMP to the five perception problems on simulated and real datasets and demonstrate that (i) DAMP always converges to the globally optimal solution in the first three problems with 3D-3D correspondences; (ii) although DAMP sometimes converges to suboptimal solutions in the last two problems with 2D-3D correspondences, with a simple scheme for escaping local minima, DAMP almost always succeeds. Our last contribution is to demystify the surprising empirical performance of DAMP and formally prove a global convergence result in the case of point cloud registration by characterizing the local stability of the equilibrium points of the underlying dynamical system.
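The core mechanism, virtual springs between corresponding primitives driving a rigid body to alignment, can be sketched in the simplest setting: point cloud registration in 2D with overdamped (first-order) dynamics, so that the body's velocity and angular velocity are proportional to the net spring force and torque. This is a toy illustration under our own simplifications; DAMP itself simulates full rigid-body dynamics on 3D primitives and adds the damping and escape schemes described above, and all names here are ours.

```python
import numpy as np

def damp_register_2d(P, Q, steps=5000, dt=0.02):
    """Align source points P to targets Q (known correspondences) by
    simulating overdamped rigid-body dynamics: each correspondence
    contributes a virtual spring, and the body moves with velocity
    proportional to the net spring force and torque."""
    theta, t = 0.0, np.zeros(2)
    n = len(P)
    for _ in range(steps):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        RP = P @ R.T
        residual = Q - (RP + t)            # spring vectors, one per pair
        force = residual.sum(axis=0)       # net spring force on the body
        torque = np.sum(RP[:, 0] * residual[:, 1]
                        - RP[:, 1] * residual[:, 0])
        t = t + dt * force / n             # overdamped (first-order) update
        theta = theta + dt * torque / n
    return theta, t

# Recover a known rigid motion from exact correspondences:
rng = np.random.default_rng(0)
P = rng.normal(size=(10, 2))
true_theta, true_t = 0.7, np.array([1.0, -2.0])
c, s = np.cos(true_theta), np.sin(true_theta)
Q = P @ np.array([[c, -s], [s, c]]).T + true_t
theta, t = damp_register_2d(P, Q)
```

In this first-order form the dynamics reduce to gradient flow on the sum of squared spring lengths, whose only stable equilibria are the globally optimal alignments, mirroring the stability analysis the paper carries out for the full system.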
We present a new deep learning-based adaptive control framework for nonlinear systems with multiplicatively-separable parametric uncertainty, called an adaptive Neural Contraction Metric (aNCM). The aNCM uses a neural network model of an optimal adaptive contraction metric, the existence of which guarantees asymptotic stability and exponential boundedness of system trajectories under the parametric uncertainty. In particular, we exploit the concept of a Neural Contraction Metric (NCM) to obtain a nominal provably stable robust control policy for nonlinear systems with bounded disturbances, and combine this policy with a novel adaptation law to achieve stability guarantees. We also show that the framework is applicable to adaptive control of dynamical systems modeled via basis function approximation. Furthermore, the use of neural networks in the aNCM permits its real-time implementation, resulting in broad applicability to a variety of systems. Its superiority to the state-of-the-art is illustrated with a simple cart-pole balancing task.
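For readers unfamiliar with adaptive control under multiplicatively-separable uncertainty theta * phi(x), the classical scalar mechanism the aNCM builds on can be sketched as follows. This is a textbook certainty-equivalence adaptation law on a toy plant of our own choosing, not the aNCM itself, which replaces the fixed gains with a learned contraction metric for general nonlinear systems.

```python
import numpy as np

def simulate_adaptive(theta_true=2.0, k=2.0, gamma=5.0, dt=1e-3, T=10.0):
    """Adaptive control of the scalar plant
        xdot = theta * phi(x) + u,   phi(x) = x**3,
    with unknown theta. The law u = -k*x - theta_hat*phi(x) plus the
    adaptation theta_hat_dot = gamma * phi(x) * x drives x -> 0:
    V = x**2/2 + (theta - theta_hat)**2/(2*gamma) gives Vdot = -k*x**2."""
    x, theta_hat = 1.0, 0.0
    for _ in range(int(T / dt)):
        phi = x ** 3
        u = -k * x - theta_hat * phi        # cancel the *estimated* term
        x += dt * (theta_true * phi + u)    # plant (forward Euler)
        theta_hat += dt * gamma * phi * x   # Lyapunov-based adaptation
    return x, theta_hat

x_final, theta_hat = simulate_adaptive()
```

The Lyapunov argument bounds theta_hat even though it need not converge to the true parameter; the aNCM generalizes exactly this kind of guarantee (asymptotic stability with bounded trajectories) beyond the scalar case.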
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d). This nested system of two flows, where the parameter flow is constrained to lie on the compact manifold, provides stable and effective training and provably solves the gradient vanishing-explosion problem which is intrinsically related to training deep neural network architectures such as Neural ODEs. Consequently, it leads to better downstream models, as we show in the example of training reinforcement learning policies with evolution strategies, and in the supervised learning setting, by comparing with previous SOTA baselines. We provide strong convergence results for our proposed mechanism that are independent of the depth of the network, supporting our empirical studies. Our results show an intriguing connection between the theory of deep neural networks and the field of matrix flows on compact manifolds.
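A flow on O(d) can be discretized so that orthogonality is preserved exactly, e.g. with the Cayley transform, and orthogonal weights propagate signals isometrically, which is the intuition behind the vanishing/exploding-gradient claim. The sketch below is our own minimal illustration of this property, not the ODEtoODE architecture or its training procedure.

```python
import numpy as np

def cayley_step(W, A, h):
    """One step of the orthogonal flow Wdot = W @ A (A skew-symmetric)
    via the Cayley transform, which keeps W exactly on O(d)."""
    d = W.shape[0]
    I = np.eye(d)
    # (I - h/2 A)^-1 (I + h/2 A) is orthogonal whenever A is skew.
    return W @ np.linalg.solve(I - 0.5 * h * A, I + 0.5 * h * A)

rng = np.random.default_rng(1)
d = 8
W = np.eye(d)
x = rng.normal(size=d)
x /= np.linalg.norm(x)          # unit-norm signal
for _ in range(100):
    M = rng.normal(size=(d, d))
    A = M - M.T                 # random skew-symmetric generator
    W = cayley_step(W, A, 0.1)
    x = W @ x                   # norm preserved at every depth
```

After 100 steps the propagated signal still has unit norm and W remains orthogonal to machine precision, i.e. the signal neither vanishes nor explodes regardless of depth.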
Neural Ordinary Differential Equations (ODEs) are elegant reinterpretations of deep networks where continuous time can replace the discrete notion of depth, ODE solvers perform forward propagation, and the adjoint method enables efficient, constant memory backpropagation. Neural ODEs are universal approximators only when they are non-autonomous, that is, the dynamics depends explicitly on time. We propose a novel family of Neural ODEs with time-varying weights, where time-dependence is non-parametric, and the smoothness of weight trajectories can be explicitly controlled to allow a tradeoff between expressiveness and efficiency. Using this enhanced expressiveness, we outperform previous Neural ODE variants in both speed and representational capacity, ultimately outperforming standard ResNet and CNN models on select image classification and video prediction tasks.
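One simple way to realize a time-varying weight trajectory with explicitly controllable smoothness is to blend a set of anchor matrices with a kernel whose bandwidth sets the smoothness/expressiveness tradeoff. This construction is our own illustrative assumption; the paper's non-parametric scheme differs, and all names here are hypothetical.

```python
import numpy as np

def weights_at(t, knots, anchors, bandwidth):
    """Time-varying weight matrix W(t) as a smooth kernel-weighted
    combination of anchor matrices placed at time knots. A small
    bandwidth gives a wiggly, expressive trajectory; a large one gives
    a smooth, nearly constant trajectory."""
    w = np.exp(-((t - knots) / bandwidth) ** 2)
    w /= w.sum()                               # normalized kernel weights
    return np.tensordot(w, anchors, axes=1)    # blend the anchor matrices

# Five 3x3 anchors on [0, 1]; query the trajectory at t = 0.5:
knots = np.linspace(0.0, 1.0, 5)
anchors = np.random.default_rng(2).normal(size=(5, 3, 3))
W_mid = weights_at(0.5, knots, anchors, bandwidth=0.2)
```

In the limit of vanishing bandwidth the trajectory interpolates the anchors exactly, while larger bandwidths trade representational capacity for smoother (cheaper-to-integrate) dynamics.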
An analytical approach to the SLAM problem was introduced in recent years. In this work, we investigate the method numerically, with the motivation of using the algorithm in real hardware experiments. We perform a robustness test of the algorithm and apply it to robotic hardware in two different setups. In the first, we recover a map of the environment using both bearing-angle and radial-distance measurements; the second setup utilizes only bearing-angle information.
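The bearing-only setup can be made concrete with the standard triangulation step: each global bearing measured from a known pose constrains the landmark to a ray, yielding one linear equation per measurement. The sketch below shows only this landmark-recovery step under our own simplifying assumptions (known poses, noiseless bearings); the full method also estimates the robot trajectory.

```python
import numpy as np

def triangulate_bearings(poses, bearings):
    """Estimate a 2D landmark from bearing-only measurements at known
    poses. A global bearing beta from pose (x, y) gives the linear ray
    constraint  sin(beta)*lx - cos(beta)*ly = sin(beta)*x - cos(beta)*y,
    and the landmark is the least-squares intersection of the rays."""
    A, b = [], []
    for (x, y), beta in zip(poses, bearings):
        A.append([np.sin(beta), -np.cos(beta)])
        b.append(np.sin(beta) * x - np.cos(beta) * y)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

# Two viewpoints observing a landmark at (2, 1):
landmark = np.array([2.0, 1.0])
poses = [(0.0, 0.0), (4.0, 0.0)]
bearings = [np.arctan2(landmark[1] - y, landmark[0] - x) for x, y in poses]
est = triangulate_bearings(poses, bearings)
```

With radial-distance measurements available as well (the first setup), each observation fixes the landmark directly, which is why the bearing-only case is the harder robustness test.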
We discuss technical results on learning function approximations using piecewise-linear basis functions, and analyze their stability and convergence using nonlinear contraction theory.
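A piecewise-linear basis is concretely a set of "hat" functions, each equal to one at its knot and falling linearly to zero at the neighbouring knots. The sketch below fits such a basis by least squares; it illustrates the approximation setup only, under our own choice of target function and knot count, and says nothing about the contraction-theoretic analysis.

```python
import numpy as np

def hat_basis(x, knots):
    """Evaluate piecewise-linear 'hat' basis functions at points x,
    assuming uniformly spaced knots. Any continuous piecewise-linear
    function on the knot grid is a linear combination of these."""
    h = knots[1] - knots[0]
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / h)

# Least-squares fit of sin(x) on [0, pi] with 17 hat functions:
knots = np.linspace(0.0, np.pi, 17)
x = np.linspace(0.0, np.pi, 200)
Phi = hat_basis(x, knots)                       # design matrix (200 x 17)
coef, *_ = np.linalg.lstsq(Phi, np.sin(x), rcond=None)
approx = Phi @ coef
```

The fit error shrinks quadratically with the knot spacing (the usual O(h^2) piecewise-linear interpolation bound), which is the kind of quantitative handle a stability and convergence analysis needs.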
With the increased application of model-based whole-body control in legged robots, there has been a resurgence of research interest into methods for accurate system identification. An important class of methods focuses on the inertial parameters of rigid-body systems. These parameters consist of the mass, first mass moment (related to center of mass location), and rotational inertia matrix of each link. The main contribution of this paper is to formulate physical-consistency constraints on these parameters as Linear Matrix Inequalities (LMIs). The use of these constraints in identification can accelerate convergence and increase robustness to noisy data. It is critically observed that the proposed LMIs are expressed in terms of the covariance of the mass distribution, rather than its rotational moments of inertia. With this perspective, connections to the classical problem of moments in mathematics are shown to yield new bounding-volume constraints on the mass distribution of each link. While previous work ensured physical plausibility or used convex optimization in identification, the LMIs here uniquely enable both advantages. Constraints are applied to identification of a leg for the MIT Cheetah 3 robot. Detailed properties of transmission components are identified alongside link inertias, with parameter optimization carried out to global optimality through semidefinite programming.
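The covariance-based constraint can be stated as a single 4x4 LMI on the pseudo-inertia matrix: stacking the mass-distribution covariance Sigma, the first mass moment h, and the mass m into J = [[Sigma, h], [h^T, m]], physical consistency requires J to be positive semidefinite. The sketch below checks this feasibility condition for given parameters; in identification the same LMI would be imposed as a constraint inside the semidefinite program, and the function name here is ours.

```python
import numpy as np

def is_physically_consistent(mass, h, I_bar, tol=1e-9):
    """Check physical consistency of rigid-body inertial parameters via
    the pseudo-inertia LMI: with I_bar the rotational inertia about the
    body-frame origin and h the first mass moment, the matrix
        J = [[Sigma, h], [h^T, m]],  Sigma = 0.5*tr(I_bar)*eye(3) - I_bar,
    must be positive semidefinite. Sigma is the covariance (second
    moment matrix) of the mass distribution, the quantity the paper's
    constraints are expressed in, rather than I_bar itself."""
    Sigma = 0.5 * np.trace(I_bar) * np.eye(3) - I_bar
    J = np.zeros((4, 4))
    J[:3, :3] = Sigma
    J[:3, 3] = h
    J[3, :3] = h
    J[3, 3] = mass
    return np.min(np.linalg.eigvalsh(J)) >= -tol

# A unit-mass uniform sphere of unit radius about its center is consistent:
I_sphere = (2.0 / 5.0) * np.eye(3)
ok = is_physically_consistent(1.0, np.zeros(3), I_sphere)
```

Parameters violating the triangle inequality on the principal moments (e.g. a diagonal inertia of (1, 1, 3)) fail the check, since no positive mass density can realize them.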