This paper introduces a guidance and control method for simultaneously capturing and stabilizing a fast-spinning target satellite, such as a spin-stabilized satellite, using a spinning-base servicing satellite equipped with a robotic manipulator, joint locks, and reaction wheels (RWs). The RWs of the servicing satellite are controlled to replicate the spinning motion of the target while the manipulator's joints are locked, achieving spin-matching. This maneuver renders the target stationary with respect to a rotating frame fixed to the servicing satellite at its center of mass (CoM), which simplifies the capture trajectory planning and eliminates post-capture trajectory planning entirely. In the next phase, the joints are unlocked, and a coordination controller drives the robotic manipulator to capture the target while maintaining zero relative rotation between the servicing and target satellites. The spin-stabilization phase begins once capture is complete: the joints are locked so that the rigidly connected servicing and target satellites form a single tumbling rigid body, and an optimal controller applies braking torques to the RWs to damp out the tumbling motion of the interconnected satellites as quickly as possible, subject to the actuation torque limit of the RWs and the maximum torque that the manipulator's end-effector may exert.
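Along a single principal axis, the torque-limited despin described above reduces to a saturated (bang-bang) braking law whose stopping time approaches the analytic minimum I·|ω₀|/τ_max. The sketch below is a minimal single-axis illustration, not the paper's multi-axis optimal controller; the inertia, torque limit, and initial rate are assumed values.

```python
# Single-axis despin under a reaction-wheel torque limit (all values assumed).
# The time-optimal policy saturates the braking torque against the spin.
I = 500.0        # inertia of the connected stack about the spin axis, kg*m^2
tau_max = 1.0    # reaction-wheel torque limit, N*m
omega = 0.5      # initial tumble rate, rad/s
dt = 0.01        # integration step, s
t = 0.0
while abs(omega) > 1e-4:
    tau = -tau_max if omega > 0 else tau_max   # bang-bang braking torque
    omega += (tau / I) * dt                    # Euler step of I*omega_dot = tau
    t += dt
# stopping time approaches the analytic minimum I*|omega_0|/tau_max = 250 s
```

The end-effector torque limit in the paper would add a second, tighter saturation bound on `tau`, which only lengthens the stopping time.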
This paper presents a vision-based guidance and control method for autonomous robotic capture and stabilization of orbital objects in a time-critical manner. The method accounts for various operational and physical constraints, including ensuring a smooth capture, handling line-of-sight (LOS) obstructions of the target, and respecting the acceleration, force, and torque limits of the robot. Our approach develops an optimal control framework for an eye-to-hand visual servoing method that integrates two sequential sub-maneuvers, a pre-capture maneuver and a post-capture maneuver, to achieve the shortest possible capture time. Integrating the two control strategies enables a seamless transition between them, allowing real-time switching to the appropriate control system. Moreover, both controllers are adaptively tuned through vision feedback to account for the unknown dynamics of the target. The integrated estimation and control architecture also facilitates fault detection and recovery of the visual feedback when the feedback is temporarily obstructed. The experimental results demonstrate successful execution of pre- and post-capture operations on a tumbling and drifting target despite multiple operational constraints.
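The real-time switching between the two sub-maneuvers can be pictured as a single control interface that hands over from the pre-capture controller to the post-capture controller once the grasp is confirmed. The sketch below is schematic only; the gains, the capture flag, and both control laws are placeholder assumptions, not the paper's optimal controllers.

```python
def pre_capture_ctrl(err):
    # pursue the grapple point (assumed proportional law)
    return 2.0 * err

def post_capture_ctrl(rate):
    # damp the residual tumble after grasping (assumed rate feedback)
    return -5.0 * rate

def control_step(grasped, err, rate):
    """One interface, two controllers: switch on the capture condition."""
    return post_capture_ctrl(rate) if grasped else pre_capture_ctrl(err)

u_pre = control_step(False, err=0.1, rate=0.0)    # pre-capture action
u_post = control_step(True, err=0.0, rate=0.4)    # post-capture action
```

Because both controllers share one interface, the handover needs no re-initialization, which is what makes the transition seamless.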
This paper presents a method for guiding a robot manipulator to capture a tumbling satellite and bring it to a state of rest. The proposed approach develops a coordination control for the combined system of the space robot and the target satellite, in which the satellite acts as the manipulator payload. This control ensures that the robot tracks the optimal path while regulating the attitude of the chase vehicle to a desired value. Two optimal trajectories are then designed for the pre- and post-capture phases. In the pre-capture phase, the manipulator maneuver is optimized by minimizing a cost function comprising the travel time and the weighted norms of the end-effector velocity and acceleration, subject to the constraint that the robot end-effector and a grapple fixture on the satellite arrive at the rendezvous point with the same velocity. In the post-grasping phase, the manipulator dumps the initial velocity of the tumbling satellite in minimum time while ensuring that the magnitude of the torque applied to the satellite remains below a safe value.
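A minimal 1-D illustration of the pre-capture trade-off: for each candidate arrival time T, a cubic trajectory meets the terminal position and the velocity-matching (smooth-capture) constraint, and the best T minimizes travel time plus weighted integrals of squared velocity and acceleration. The boundary values, weights, and the cubic parameterization below are assumptions, a simplification of the paper's optimal trajectory design.

```python
import numpy as np

def cubic_cost(T, x0=0.0, xf=1.0, vf=0.2, w_v=1.0, w_a=1.0):
    # cubic x(t) = x0 + a2*t^2 + a3*t^3 satisfying x(0)=x0, v(0)=0,
    # x(T)=xf, and the smooth-capture constraint v(T)=vf
    A = np.array([[T**2, T**3], [2*T, 3*T**2]])
    a2, a3 = np.linalg.solve(A, [xf - x0, vf])
    t = np.linspace(0.0, T, 400)
    dt = t[1] - t[0]
    v = 2*a2*t + 3*a3*t**2                 # end-effector velocity
    a = 2*a2 + 6*a3*t                      # end-effector acceleration
    return T + w_v*np.sum(v**2)*dt + w_a*np.sum(a**2)*dt

Ts = np.linspace(0.5, 10.0, 96)
T_best = Ts[np.argmin([cubic_cost(T) for T in Ts])]   # optimal arrival time
```

Short arrival times are penalized by large accelerations and long ones by the time term, so the minimizer is interior, mirroring the structure of the paper's cost function.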
The robustness and accuracy of a vision system for motion estimation of a tumbling target satellite are enhanced by an adaptive Kalman filter, allowing a vision-guided robot to complete the grasping of the target even if occlusion occurs during the operation. A complete dynamics model, including aspects of orbital mechanics, is incorporated for accurate estimation. Based on this model, an adaptive Kalman filter is developed that estimates not only the system states but also all the model parameters, such as the inertia ratios, the center of mass, and the orientation of the principal axes of the target satellite. In an experiment, a robotic arm moves a satellite mockup according to orbital mechanics while the satellite pose is measured by a laser camera system. The measurements are fed to the Kalman filter, which in turn drives another robotic arm to grasp the target. The results demonstrate successful grasping even when the vision system is blocked for several seconds.
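As a toy illustration of the innovation-based adaptation idea (not the paper's full filter, which also estimates inertia parameters and pose), the scalar sketch below tunes the measurement-noise variance R from a window of innovations, so degraded vision inflates R instead of corrupting the state. All noise levels and the motion model are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true, x_est = 0.0, 0.0
P, Q, R = 1.0, 1e-4, 1.0          # covariance, process and measurement noise
innov = []
for k in range(500):
    x_true += 0.01                        # true drift of the target "pose"
    z = x_true + rng.normal(0.0, 0.3)     # noisy pose measurement
    x_est += 0.01                         # predict with the known drift model
    P += Q
    nu = z - x_est                        # innovation
    innov.append(nu)
    if len(innov) > 30:
        innov.pop(0)
        # adapt R from the windowed innovation variance
        R = max(np.var(innov) - P, 1e-6)
    K = P / (P + R)                       # Kalman gain
    x_est += K * nu
    P *= (1.0 - K)
```

During an occlusion the update step can simply be skipped, and the dynamics-model prediction carries the estimate, which is how grasping survives a blocked sensor.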
This work presents a hybrid-simulator methodology for space docking and robotic proximity operations. The methodology also allows the emulation of a target robot operating in a complex environment by using an actual robot. The emulation scheme replicates the dynamic behavior of the target robot interacting with the environment without requiring a complex calculation of the contact dynamics, and it forms a basis for task verification of a flexible space robot. The actual emulating robot is structurally rigid, while the target robot can represent any class of robots, e.g., flexible, redundant, or space robots. Although the emulating robot is not dynamically equivalent to the target robot, dynamical similarity can be achieved using a control law developed herein. The effect of disturbances and actuator dynamics on the fidelity and contact stability of the robot emulation is thoroughly analyzed.
This paper presents a method to control a manipulator system grasping a rigid-body payload so that the motion of the combined system in response to externally applied forces is the same as that of another free-floating rigid body with different inertial properties. This allows zero-g emulation of a scaled spacecraft prototype under test in a 1-g laboratory environment. The controller, consisting of motion feedback and force/moment feedback, adjusts the motion of the test spacecraft to match that of the flight spacecraft, even if the latter has flexible appendages (such as solar panels) and the former is rigid. The stability of the overall system is analytically investigated, and the results show that the system remains stable provided that the inertial properties of the two spacecraft are different and that an upper bound on the norm of the inertia ratio of the payload to the manipulator is respected. Important practical issues such as calibration and sensitivity to sensor noise and quantization are also presented.
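The core of the matching idea can be sketched as an admittance law: the measured interface force is propagated through the flight spacecraft's inertial properties to command the motion of the rigid test article. The sketch below is a 1-D translational caricature with assumed mass and force values, not the paper's full motion plus force/moment feedback controller.

```python
m_flight = 150.0      # mass of the flight spacecraft being emulated (assumed)
dt = 0.001
v_cmd, x_cmd = 0.0, 0.0
for _ in range(1000):                # 1 s of a constant 3 N contact force
    f_meas = 3.0                     # mocked force/torque sensor reading
    a_des = f_meas / m_flight        # acceleration the flight body would have
    v_cmd += a_des * dt              # integrate into a motion command for
    x_cmd += v_cmd * dt              # the emulating manipulator to track
# after 1 s the commanded velocity matches the free-floating response, 0.02 m/s
```

Because the commanded motion depends only on the measured force and the flight-body model, the test article responds as if it had the flight inertial properties, regardless of its own.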
The problem of self-tuning control of cooperative manipulators forming a closed kinematic chain in the presence of an inaccurate kinematics model is addressed using adaptive machine learning. The kinematic parameters pertaining to the relative position/orientation uncertainties of the interconnected manipulators are updated online by two cascaded estimators in order to tune a cooperative controller for accurate motion tracking with minimum-norm actuation force. This technique permits accurate calibration of the relative kinematics of the involved manipulators without high-precision end-point sensing or force measurements, and hence is economically justified. A stability analysis of the combined real-time estimator/controller system reveals that convergence and stability of the adaptive control process are ensured if i) the direction of the angular velocity vector does not remain constant over time, and ii) the initial kinematic-parameter error is upper bounded by a scalar function of known parameters. The adaptive controller is proved to be singularity-free even though the control law involves inverting an approximate matrix computed at the estimated parameters. Experimental results demonstrate the sensitivity of the tracking performance of the conventional inverse-dynamics control scheme to kinematic inaccuracies, while the tracking error is significantly reduced by the self-tuning cooperative controller.
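Condition (i) above is a persistent-excitation requirement. The toy check below illustrates it on a simplified problem, identifying a fixed relative-position offset p from velocity relations of the form y = ω × p: if ω keeps a constant direction, the stacked regressor loses rank and the component of p along that axis is unidentifiable. The setup is an assumed stand-in for illustration, not the paper's estimator.

```python
import numpy as np

def skew(w):
    # matrix form of the cross product: skew(w) @ p == np.cross(w, p)
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

# angular velocities whose direction never changes (z-axis only)
const_axis = np.vstack([skew([0.0, 0.0, w]) for w in (1.0, 2.0, 3.0)])
# angular velocities whose direction varies over time
varying = np.vstack([skew(w) for w in ([1.0, 0.0, 0.0],
                                       [0.0, 1.0, 0.0],
                                       [0.0, 0.0, 1.0])])
r_const = np.linalg.matrix_rank(const_axis)   # 2: offset not identifiable
r_vary = np.linalg.matrix_rank(varying)       # 3: offset fully identifiable
```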
This paper focuses on an adaptive, fault-tolerant vision-guided robotic system that can choose the most appropriate control action in the event of short-term partial or complete failure of the vision system. Moreover, the autonomous robotic system takes physical and operational constraints into account to meet the demands of a specific visual servoing task while minimizing a cost function. A hierarchical control architecture is developed based on the interwoven integration of a variant of the iterative closest point (ICP) image-registration algorithm, a constrained noise-adaptive Kalman filter, fault-detection logic with recovery, and a constrained optimal path planner. The dynamic estimator estimates the unknown states and uncertain parameters required for motion prediction while imposing a set of inequality constraints for consistency of the estimation process and adaptively adjusting the Kalman filter parameters in the face of unexpected vision errors. This is followed by a fault-recovery strategy based on fault-detection logic that monitors the health of the visual feedback using the metric fit error of the image registration. Subsequently, the estimated/predicted pose and parameters are passed to an optimal path planner that brings the robot end-effector to the grasping point of a moving target as quickly as possible, subject to multiple constraints such as the acceleration limit, smooth capture, and the line-of-sight angle of the target.
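The fit-error-based fault gate can be sketched in a few lines: accept the visual measurement only while the ICP metric fit error is below a health threshold, and otherwise coast on the model-based prediction. The threshold and pose values below are assumed placeholders.

```python
FIT_ERR_THRESHOLD = 0.01   # assumed health threshold on the ICP fit error (m)

def select_update(fit_error, measured_pose, predicted_pose):
    """Gate the visual update on registration health; coast on the
    dynamics-model prediction while the feedback is faulty or obstructed."""
    if fit_error < FIT_ERR_THRESHOLD:
        return measured_pose, "vision"
    return predicted_pose, "prediction"

pose_ok, src_ok = select_update(0.002, measured_pose=1.0, predicted_pose=1.1)
pose_bad, src_bad = select_update(0.05, measured_pose=9.9, predicted_pose=1.1)
```

In the second call the outlier measurement (9.9) is rejected, so a registration fault never reaches the planner.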
This paper presents a fault-tolerant 3D vision system for autonomous robotic operation. In particular, pose estimation of space objects is achieved using 3D vision data in an integrated Kalman filter (KF) and Iterative Closest Point (ICP) algorithm in a closed-loop configuration. The initial guess for the internal ICP iteration is provided by the state-estimate propagation of the Kalman filter. The Kalman filter estimates not only the target's states but also its inertial parameters, which makes the motion of the target predictable as soon as the filter converges. Consequently, the ICP can maintain pose tracking over a wider range of velocities owing to the improved precision of the ICP initialization. Furthermore, incorporating the target's dynamics model in the estimation process allows the estimator to continuously provide pose estimates even when the sensor temporarily loses its signal, e.g., due to obstruction. The capabilities of the pose estimation methodology are demonstrated on a ground testbed for Automated Rendezvous & Docking. In this experiment, Neptec's Laser Camera System (LCS) is used for real-time scanning of a satellite model attached to a manipulator arm, which is driven by a simulator according to orbital and attitude dynamics. The results show that robust tracking of the free-floating tumbling satellite can be achieved only if the Kalman filter and ICP are in a closed-loop configuration.
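The closed-loop KF/ICP coupling can be caricatured as follows: the filter's propagated pose seeds the ICP iteration, and the refined ICP pose then updates the filter. Here `icp_refine` is a stand-in with a limited basin of convergence that mimics real ICP, and all rates and gains are assumed; the point is that the prediction keeps the ICP initial guess inside its basin even for a fast-moving target.

```python
BASIN = 0.5   # assumed basin of convergence of the ICP refinement

def icp_refine(guess, true_pose):
    # stand-in for ICP: converges (with a small residual) only when the
    # initial guess is already close to the true pose
    return true_pose + 0.01 if abs(guess - true_pose) < BASIN else guess

x_est, v_est, dt = 0.0, 0.3, 1.0   # estimated pose and rate
true_pose, tracked = 0.0, True
for k in range(20):
    true_pose += 0.3 * dt              # fast tumbling motion
    x_pred = x_est + v_est * dt        # KF propagation seeds the ICP
    z = icp_refine(x_pred, true_pose)  # ICP started from the prediction
    tracked = tracked and abs(z - true_pose) < BASIN
    x_est = x_pred + 0.5 * (z - x_pred)    # constant-gain pose update
    v_est += 0.1 * (z - x_pred) / dt       # constant-gain rate update
```

Seeding ICP from the last raw measurement instead of the prediction would fall behind a target moving faster than the basin per frame, which is the open-loop failure mode the paper observes.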
This paper presents Lidar-based Simultaneous Localization and Mapping (SLAM) for autonomous driving vehicles. The fusion of data from landmark sensors and a strap-down Inertial Measurement Unit (IMU) in an adaptive Kalman filter (KF), as well as the observability of the system, is investigated. In addition to the vehicle's states and the landmark positions, a self-tuning filter estimates the IMU calibration parameters and the covariance of the measurement noise. The discrete-time covariance matrix of the process noise, the state-transition matrix, and the observation sensitivity matrix are derived in closed form, making them suitable for real-time implementation. Examination of the observability of the 3D SLAM system shows that the system remains observable provided that a geometric condition on the alignment of the landmarks is satisfied.
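The flavor of such a geometric condition can be shown with a toy range-only example: the measurement Jacobian with respect to the 2-D vehicle position is full rank only when the landmark line-of-sight directions are not all parallel. This is a simplified, assumed stand-in for the paper's 3-D Lidar/IMU observability analysis.

```python
import numpy as np

def range_jacobian(p, landmarks):
    # each row is the unit line-of-sight vector from a landmark to the
    # vehicle: the Jacobian of the range measurements w.r.t. position p
    rows = []
    for lm in landmarks:
        d = p - lm
        rows.append(d / np.linalg.norm(d))
    return np.array(rows)

p = np.array([0.0, 0.0])
spread = [np.array([5.0, 0.0]), np.array([0.0, 5.0])]      # good geometry
collinear = [np.array([5.0, 0.0]), np.array([-3.0, 0.0])]  # vehicle on the line
r_good = np.linalg.matrix_rank(range_jacobian(p, spread))      # 2: observable
r_bad = np.linalg.matrix_rank(range_jacobian(p, collinear))    # 1: degenerate
```

In the degenerate case the position component perpendicular to the landmark line is unobservable from the ranges alone, so only the landmark geometry, not more measurements of the same kind, restores observability.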