Keypoint detection serves as the basis for many computer vision and robotics applications. Although colored point clouds can be readily obtained, most existing keypoint detectors extract only geometry-salient keypoints, which can impede the overall performance of systems that leverage, or could leverage, color information. To promote advances in such systems, we propose an efficient multi-modal keypoint detector that can extract both geometry-salient and color-salient keypoints in colored point clouds. The proposed CEntroid Distance (CED) keypoint detector comprises an intuitive and effective saliency measure, the centroid distance, that can be used in both 3D space and color space, and a multi-modal non-maximum suppression algorithm that can select keypoints with high saliency in two or more modalities. The proposed saliency measure directly leverages the distribution of points in a local neighborhood and requires neither normal estimation nor eigenvalue decomposition. We evaluate the proposed method in terms of repeatability and computational efficiency (i.e., running time) against state-of-the-art keypoint detectors on both synthetic and real-world datasets. Results demonstrate that our CED keypoint detector requires minimal computational time while attaining high repeatability. To showcase one potential application of the proposed method, we further investigate the task of colored point cloud registration. Results suggest that the CED detector outperforms state-of-the-art handcrafted and learning-based keypoint detectors in the evaluated scenes. The C++ implementation of the proposed method is publicly available at https://github.com/UCR-Robotics/CED_Detector.
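The abstract does not give the exact formula, but the core idea can be sketched as follows: a point's saliency is its distance to the centroid of its local neighborhood, which is near zero in flat, uniform regions and large near corners and edges. The minimal NumPy sketch below (the function name, brute-force k-NN, and choice of k are illustrative assumptions, not the paper's implementation) applies unchanged to 3D coordinates or to color channels:

```python
import numpy as np

def centroid_distance_saliency(points, k=8):
    """Hypothetical sketch: saliency of each point is its distance to the
    centroid of its k nearest neighbors (the point itself excluded)."""
    n = len(points)
    saliency = np.zeros(n)
    for i in range(n):
        dists = np.linalg.norm(points - points[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]        # k nearest, skipping self
        centroid = points[nbrs].mean(axis=0)
        saliency[i] = np.linalg.norm(points[i] - centroid)
    return saliency
```

On a planar grid, an interior point's neighborhood is symmetric (saliency near zero) while a corner point's neighborhood centroid is pulled inward (high saliency), which is the behavior a geometry-salient detector needs.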
This paper focuses on the emerging paradigm shift toward collision-inclusive motion planning and control for impact-resilient mobile robots, and develops a unified hierarchical framework for navigation in unknown, partially-observable cluttered spaces. At the lower level, we develop a deformation recovery control and trajectory replanning strategy that locally handles collisions occurring at run time. The low-level system actively detects collisions (via Hall effect sensors embedded in a mobile robot built in-house), enables the robot to recover from them, and locally adjusts the post-impact trajectory. At the higher level, we propose a search-based planning algorithm that determines how to best utilize potential collisions to improve metrics such as control energy and computational time. Our method builds upon A* with jump points; we design a novel heuristic function along with a collision checking and adjustment technique, making the A* algorithm converge faster to the goal by exploiting possible collisions. The overall hierarchical framework, obtained by combining the global A* planner with the local deformation recovery and replanning strategy, as well as its individual components, is tested extensively both in simulation and experimentally. An ablation study draws links to related state-of-the-art search-based collision-avoidance planners (for the overall framework), as well as to search-based collision-avoidance and sampling-based collision-inclusive global planners (for the higher level). Results demonstrate our method's efficacy for collision-inclusive motion planning and control in unknown environments with isolated obstacles for a class of impact-resilient robots operating in 2D.
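To illustrate the collision-inclusive idea (this is a toy sketch, not the paper's algorithm, which uses jump points and a tailored heuristic), the code below runs A* on a grid where deformable obstacles may be traversed at an extra "collision" cost, so the planner trades detour length against impact cost. Cell encodings, costs, and names are all hypothetical:

```python
import heapq
import itertools

def collision_inclusive_astar(grid, start, goal, collision_cost=3.0):
    """Toy collision-inclusive A* on a grid. Cells: 0 = free,
    1 = hard obstacle (forbidden), 2 = deformable obstacle
    (traversable at an extra collision cost)."""
    rows, cols = len(grid), len(grid[0])
    heur = lambda p: ((p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2) ** 0.5
    tie = itertools.count()            # tiebreaker so the heap never compares paths
    open_set = [(heur(start), 0.0, next(tie), start, [start])]
    best_g = {}
    while open_set:
        _, g, _, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return g, path             # first goal pop is optimal (consistent heuristic)
        if best_g.get(cur, float('inf')) <= g:
            continue
        best_g[cur] = g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = cur[0] + dr, cur[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != 1:
                step = 1.0 + (collision_cost if grid[nr][nc] == 2 else 0.0)
                ng = g + step
                heapq.heappush(open_set, (ng + heur((nr, nc)), ng,
                                          next(tie), (nr, nc), path + [(nr, nc)]))
    return None                        # goal unreachable
```

Lowering `collision_cost` makes the planner more willing to push through deformable obstacles, which is the trade-off a collision-inclusive planner exposes.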
Promoting exploratory movements through contingent feedback can positively influence motor development in infancy. Our ongoing work is geared toward the development of a robot-assisted contingency learning environment that uses small aerial robots. This paper examines whether aerial robots and their associated motion controllers can achieve the efficient and highly-responsive robot flight required for this purpose. Infant kicking kinematic data were extracted from videos and used in simulation and physical experiments with an aerial robot. The efficacy of two standard-of-practice controllers was assessed: a linear PID controller and a nonlinear geometric controller. The ability of the robot to match infant kicking trajectories was evaluated qualitatively and quantitatively via the mean squared error (to assess overall deviation from the input infant leg trajectory signals) and the dynamic time warping algorithm (to quantify signal synchrony). Results demonstrate that it is in principle possible to track infant kicking trajectories with small aerial robots, and identify areas where further development is required to improve tracking quality.
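Dynamic time warping, used above to quantify signal synchrony, can be computed with the classic dynamic-programming recurrence. This is the standard textbook implementation (not code from the paper) returning the warped distance between two 1-D trajectories:

```python
def dtw_distance(a, b):
    """Classic DTW: D[i][j] = local cost + min over the three
    allowed predecessor alignments (match, insert, delete)."""
    n, m = len(a), len(b)
    INF = float('inf')
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Unlike mean squared error, DTW absorbs timing offsets: a signal that repeats a sample (e.g. a briefly lagging tracker) can still achieve zero warped distance.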
Contemporary robots in precision agriculture focus primarily on automated harvesting or remote sensing to monitor crop health. Comparatively less work has been performed on collecting physical leaf samples in the field and retaining them for further analysis. Typically, orchard growers manually collect sample leaves and utilize them for stem water potential measurements to analyze tree health and determine irrigation routines. While this technique benefits orchard management, the process of collecting, assessing, and interpreting measurements requires significant human labor and often leads to infrequent sampling. Automated sampling can provide highly accurate and timely information to growers. The first step in such automated in-situ leaf analysis is identifying and cutting a leaf from a tree. This retrieval process requires new methods for actuation and perception. We present a technique for detecting and localizing candidate leaves using point cloud data from a depth camera. This technique is tested on both indoor and outdoor point clouds from avocado trees. We then use a custom-built leaf-cutting end-effector on a 6-DOF robotic arm to test the proposed detection and localization technique by cutting leaves from an avocado tree. Experimental testing with a real avocado tree demonstrates that our proposed approach enables the mobile manipulator and custom end-effector system to successfully detect, localize, and cut leaves.
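As a rough illustration of candidate-leaf detection from depth data (the paper's actual pipeline is not detailed in this abstract; the function name, radius, and cluster-size threshold are assumptions), one plausible first step is to group depth-camera points into candidate clusters by single-linkage region growing:

```python
import numpy as np

def euclidean_clusters(points, radius=0.02, min_size=3):
    """Illustrative sketch: region-grow clusters of 3D points whose
    neighbors lie within `radius` (meters); clusters smaller than
    `min_size` are discarded as noise."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            i = queue.pop()
            near = [j for j in unvisited
                    if np.linalg.norm(points[j] - points[i]) <= radius]
            for j in near:
                unvisited.discard(j)   # claim the point before re-expanding it
                queue.append(j)
                cluster.append(j)
        if len(cluster) >= min_size:
            clusters.append(cluster)
    return clusters
```

Each surviving cluster would then be a candidate leaf whose centroid gives a cutting target for the end-effector; real pipelines would add filtering by size, shape, and color.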
Action recognition is an important component for improving the autonomy of physical rehabilitation devices, such as wearable robotic exoskeletons. Existing human action recognition algorithms focus on adult applications rather than pediatric ones. In this paper, we introduce BabyNet, a lightweight (in terms of trainable parameters) network structure that recognizes infant reaching actions from off-body stationary cameras. We develop an annotated dataset that includes diverse reaches performed from a sitting posture by different infants in unconstrained environments (e.g., home settings). Our approach uses the spatial and temporal connections of annotated bounding boxes to interpret the onset and offset of reaching, and to detect a complete reaching action. We evaluate the efficiency of our proposed approach and compare its performance against other learning-based network structures in terms of the capability to capture temporal inter-dependencies and the accuracy of detecting reaching onset and offset. Results indicate that BabyNet attains solid (average) testing accuracy exceeding that of other, larger networks, and can hence serve as a lightweight data-driven framework for video-based infant reaching action recognition.
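One plausible way to turn per-frame bounding boxes into onset/offset estimates (a hypothetical sketch, not BabyNet's learned mechanism; the IoU threshold and function names are assumptions) is to threshold the hand/target box overlap per frame and take the first and last frames above threshold:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def reach_onset_offset(hand_boxes, target_box, thresh=0.1):
    """Hypothetical sketch: onset = first frame whose hand/target IoU
    exceeds thresh, offset = last such frame; None if never reached."""
    hits = [t for t, hb in enumerate(hand_boxes) if iou(hb, target_box) > thresh]
    return (hits[0], hits[-1]) if hits else None
```

A learned model like BabyNet replaces this hard threshold with temporal features, which is what lets it tolerate occlusions and noisy detections.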
The Black Soldier Fly (BSF) can be an effective alternative to traditional disposal of food and agricultural waste (biowaste), such as landfilling, because its larvae quickly transform biowaste into ready-to-use biomass. However, several challenges remain to ensure that BSF farming is economically viable at different scales and can be widely implemented. Manual labor is required to maintain optimal conditions for rearing the larvae, from aerating the feeding substrate to monitoring abiotic conditions throughout the growth cycle. This paper introduces a proof-of-concept automated method for rearing BSF larvae that ensures optimal growing conditions while reducing manual labor. We retrofit existing BSF rearing bins with a "smart lid," named for its hot-swappable use across multiple bins. The system automatically aerates the larvae-diet substrate and provides bio-information about the larvae to users in real time. The proposed solution uses a custom aeration method and an array of sensors to create a soft real-time system. Larval growth is monitored using thermal imaging and classical computer vision techniques. Experimental testing reveals that our automated approach produces BSF larvae on par with manual techniques.
This paper presents the design and assessment of a fabric-based soft pneumatic actuator that requires low pressurization for actuation, making it suitable for upper extremity assistive devices for infants. The goal is to support shoulder abduction and adduction without prohibiting motion in other planes or obstructing elbow joint motion. First, the performance of a family of actuator designs with internal air cells is explored via simulation. The actuators are parameterized by the number of cells and their width. Physically viable actuator variants identified through simulation are further tested via hardware experiments. Two designs are selected and tested on a custom-built physical model based on an infant's body anthropometrics. Comparisons of the force exerted to lift the arm, movement smoothness, path length, and maximum shoulder angle reached inform which design is better suited for use as an actuator in pediatric wearable assistive devices, and yield additional insights for future work.
Soft grippers are gaining momentum across applications due to their flexibility and dexterity. However, the infinite dimensionality and nonlinearity associated with soft robots challenge the modeling and closed-loop control of soft grippers in grasping tasks. Data-driven methods have been proposed to address this problem, but most rely on intensive model learning in simulation or offline, and may therefore generalize poorly to settings not explicitly trained on, as well as to physical robot testing where online control is required. In this paper, we propose an online modeling and control algorithm that utilizes Koopman operator theory to update an estimated model of the underlying dynamics at each time step in real time. The learned and continuously updated models are then embedded into an online Model Predictive Control (MPC) structure and deployed on soft multi-fingered robotic grippers. To evaluate performance, the prediction accuracy of our approach is first compared against other model-extraction methods on different datasets. Next, the online modeling and control algorithm is tested experimentally with a soft 3-fingered gripper grasping objects of various shapes and weights initially unknown to the controller. Results indicate a high success rate in grasping different objects using the proposed method. Sample trials can be viewed at https://youtu.be/i2hCMX7zSKQ.
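The online Koopman model update can be sketched as incremental EDMD: running sums of lifted snapshot pairs are maintained, and the Koopman matrix is re-estimated by least squares at each step. The class below is a minimal illustration only (the paper's lifting dictionary, update rule, and MPC integration may differ):

```python
import numpy as np

class OnlineEDMD:
    """Minimal online EDMD sketch: K is refit at every step from
    running sums G = sum psi(x) psi(x)^T and A = sum psi(x') psi(x)^T."""
    def __init__(self, lift, dim):
        self.lift = lift                  # dictionary of lifting functions
        self.G = np.zeros((dim, dim))
        self.A = np.zeros((dim, dim))

    def update(self, x, x_next):
        """Fold in one snapshot pair and return the refreshed Koopman matrix."""
        p, pn = self.lift(x), self.lift(x_next)
        self.G += np.outer(p, p)
        self.A += np.outer(pn, p)
        # Solve G K^T = A^T in the least-squares sense (robust to rank deficiency).
        return np.linalg.lstsq(self.G, self.A.T, rcond=None)[0].T

    def predict(self, K, x):
        """One-step prediction in lifted space."""
        return K @ self.lift(x)
```

Inside an MPC loop, `update` would be called once per control step so the linear lifted model tracks the gripper's changing load; a practical implementation would use a recursive (rank-one) update instead of refitting from scratch.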
This work focuses on closed-loop control based on proprioceptive feedback for a pneumatically-actuated soft wearable device aimed at future support of infant reaching tasks. The device comprises two soft pneumatic actuators (one textile-based and one silicone-cast) actively controlling two degrees of freedom per arm (shoulder adduction/abduction and elbow flexion/extension, respectively). Inertial measurement units (IMUs) attached to the wearable device provide real-time joint angle feedback. Device kinematics analysis is informed by anthropometric data from infants (arm lengths) reported in the literature. Range of motion and muscle co-activation patterns in infant reaching are considered to derive desired trajectories for the device's end-effector. Then, a proportional-derivative controller is developed to regulate the pressure inside the actuators and in turn move the arm along desired setpoints within the reachable workspace. Experimental results on tracking desired arm trajectories using an engineered mannequin are presented, demonstrating that the proposed controller can help guide the mannequin's wrist to the desired setpoints.
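The pressure-regulation loop described above can be illustrated with a scalar PD law mapping joint-angle error to a clamped pressure command. The gains, units, and pressure limits below are placeholders for illustration, not the paper's tuned values:

```python
def pd_pressure_command(theta_des, theta, dtheta, kp=1.0, kd=0.1,
                        p_min=0.0, p_max=30.0):
    """PD law on joint-angle error (degrees, from IMU feedback) mapped to
    a pressure setpoint, clamped to the actuator's safe range.
    Gains and limits are illustrative placeholders."""
    u = kp * (theta_des - theta) - kd * dtheta   # derivative on the measurement
    return min(max(u, p_min), p_max)             # respect pressure limits
```

Clamping matters for soft wearables: the actuator can only be inflated within a safe range, so negative commands saturate at zero (vent) and large commands at the supply limit.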
Koopman operator theory has been gaining momentum for model extraction, planning, and control of data-driven robotic systems. The Koopman operator's ability to extract dynamics from data depends heavily on the selection of an appropriate dictionary of lifting functions. In this paper we propose ACD-EDMD, a new method for Analytical Construction of Dictionaries of appropriate lifting functions for a range of data-driven Koopman operator based nonlinear robotic systems. The key insight of this work is that information about fundamental topological spaces of the nonlinear system (such as its configuration space and workspace) can be exploited to steer the construction of Hermite polynomial-based lifting functions. We show that the proposed method leads to dictionaries that are simple to implement while enjoying provable completeness and convergence guarantees when observables are weighted bounded. We evaluate ACD-EDMD using a range of diverse nonlinear robotic systems in both simulation and physical hardware experiments (a wheeled mobile robot, a two-revolute-joint robotic arm, and a soft robotic leg). Results reveal that our method leads to dictionaries that enable high-accuracy prediction and that can generalize to diverse validation sets. The GitHub repository for our algorithm is available at \url{https://github.com/UCR-Robotics/ACD-EDMD}.
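To give a flavor of Hermite-polynomial lifting (a simplified per-coordinate sketch; ACD-EDMD's analytical construction also exploits the system's configuration space and workspace and is richer than this), one can evaluate probabilists' Hermite polynomials on each state coordinate:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_n

def hermite_dictionary(x, degree):
    """Lift state x by evaluating He_0 .. He_degree on each coordinate.
    (Simplified sketch: omits cross-coordinate products a full dictionary
    would include.)"""
    feats = []
    for xi in np.atleast_1d(x):
        for d in range(degree + 1):
            coeffs = np.zeros(d + 1)
            coeffs[d] = 1.0              # select the single polynomial He_d
            feats.append(hermeval(xi, coeffs))
    return np.array(feats)
```

The lifted vector then serves as the observable in EDMD; e.g. for a scalar state x = 2 and degree 2 the features are He_0(2) = 1, He_1(2) = 2, He_2(2) = 2^2 - 1 = 3.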