We present a target-driven navigation system to improve mapless visual navigation in indoor scenes. Our method takes a multi-view observation of a robot and a target as input at each time step and provides a sequence of actions that move the robot to the target without relying on odometry or GPS at runtime. The system is learned by optimizing a combined objective encompassing three key designs. First, we propose that an agent conceive the next observation before making an action decision. This is achieved by learning a variational generative module from expert demonstrations. Second, we propose predicting static collisions in advance, as an auxiliary task to improve safety during navigation. Moreover, to alleviate the training data imbalance problem of termination action prediction, we introduce a target checking module as an alternative to augmenting the navigation policy with a termination action. The three proposed designs together improve training data efficiency, static collision avoidance, and navigation generalization performance, resulting in a novel target-driven mapless navigation system. Through experiments on a TurtleBot, we provide evidence that our model can be integrated into a robotic system and navigate in the real world. Videos and models can be found in the supplementary material.
We present decentralized collision avoidance algorithms for quadrotor swarms operating under uncertain state estimation. Our approach exploits the differential flatness property and feedforward linearization to approximate the quadrotor dynamics and achieve reciprocal collision avoidance. We account for the uncertainty in position and velocity by formulating the collision constraints as chance constraints, which describe a set of velocities that avoid collisions with a specified confidence level. We present two different methods for formulating and solving the chance constraints: our first method assumes a Gaussian noise distribution, and our second method extends it to the non-Gaussian case using a Gaussian Mixture Model (GMM). We reformulate the linear chance constraints into equivalent deterministic constraints on the mean and covariance. Subsequently, the deterministic constraints are introduced into the MPC framework to compute a local collision-free trajectory for each quadrotor. We evaluate the proposed algorithm in simulations on benchmark scenarios and highlight its benefits over prior methods. We observe that both the Gaussian and non-Gaussian methods provide improved collision avoidance performance over the deterministic method. Further, the non-Gaussian method results in a relatively shorter path length compared to the Gaussian formulation. On average, the Gaussian method requires ~5 ms to compute a local collision-free trajectory, while our non-Gaussian method is computationally more expensive, requiring ~7 ms on average in the presence of 4 agents.
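For a Gaussian-distributed state, a linear chance constraint admits a standard deterministic equivalent on the mean and covariance, as described above. The following is a minimal sketch of that reformulation only, not the paper's MPC solver; the function name and interface are illustrative assumptions:

```python
import numpy as np
from statistics import NormalDist

def chance_constraint_satisfied(a, b, mu, Sigma, delta):
    """Deterministic equivalent of the linear chance constraint
        P(a^T x <= b) >= 1 - delta,  with x ~ N(mu, Sigma):
        a^T mu + Phi^{-1}(1 - delta) * sqrt(a^T Sigma a) <= b.
    The inverse normal CDF term inflates the nominal constraint by an
    uncertainty margin that grows with the confidence level 1 - delta.
    """
    margin = NormalDist().inv_cdf(1.0 - delta) * np.sqrt(a @ Sigma @ a)
    return float(a @ mu) + margin <= b
```

In an MPC setting, one such deterministic constraint per neighboring agent would be added to the local trajectory optimization at each planning step.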
Maintaining social distancing norms between humans has become an indispensable precaution to slow down the transmission of COVID-19. We present a novel method to automatically detect pairs of humans in a crowded scenario who are not adhering to the social distancing constraint, i.e., maintaining about 6 feet of space between them. Our approach makes no assumptions about the crowd density or pedestrian walking directions. We use a mobile robot with commodity sensors, namely an RGB-D camera and a 2-D lidar, to perform collision-free navigation in a crowd and estimate the distance between all detected individuals in the camera's field of view. In addition, we equip the robot with a thermal camera that wirelessly transmits thermal images to security/healthcare personnel, who monitor whether any individual exhibits a higher-than-normal temperature. In indoor scenarios, our mobile robot can also be combined with statically mounted CCTV cameras to further improve performance in terms of the number of social distancing breaches detected, accurate pursuit of walking pedestrians, etc. We highlight the performance benefits of our approach in different static and dynamic indoor scenarios.
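The per-pair distance check underlying such breach detection can be sketched by back-projecting each detected pedestrian's pixel and depth through a pinhole camera model and comparing pairwise Euclidean distances against the 6-foot threshold. The intrinsics and function names below are illustrative assumptions, not the paper's implementation:

```python
import math

SIX_FEET_M = 1.83  # ~6 feet in meters

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (meters) to a 3D point in the
    camera frame using the pinhole model; fx, fy, cx, cy are intrinsics."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def breaches(detections, fx, fy, cx, cy):
    """Return index pairs of detections closer than ~6 feet.
    Each detection is (u, v, depth) for a person's centroid pixel."""
    pts = [backproject(u, v, d, fx, fy, cx, cy) for (u, v, d) in detections]
    pairs = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if math.dist(pts[i], pts[j]) < SIX_FEET_M:
                pairs.append((i, j))
    return pairs
```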
The COVID-19 pandemic has become a global challenge faced by people all over the world. Social distancing has proven to be an effective practice for reducing the spread of COVID-19. Against this backdrop, we propose that surveillance robots can not only monitor but also promote social distancing. Robots can be flexibly deployed, and they can take precautionary actions to remind people to practice social distancing. In this paper, we introduce a fully autonomous surveillance robot based on a quadruped platform that can promote social distancing in complex urban environments. Specifically, to achieve autonomy, we mount multiple cameras and a 3D LiDAR on the legged robot. The robot then uses an onboard real-time social distancing detection system to track nearby pedestrian groups. Next, the robot uses a crowd-aware navigation algorithm to move freely in highly dynamic scenarios. Finally, the robot uses a crowd-aware routing algorithm to effectively promote social distancing, using human-friendly verbal cues to send suggestions to pedestrians in over-crowded groups. We demonstrate and validate that our robot can operate autonomously by conducting several experiments in various urban scenarios.
We study multi-agent coverage algorithms for autonomous monitoring and patrol in urban environments. We consider scenarios in which a team of flying agents uses downward-facing cameras (or similar sensors) to observe the environment outside of buildings at street level. Buildings are considered obstacles that impede movement, and cameras are assumed to be ineffective above a maximum altitude. We study multi-agent urban coverage problems related to this scenario, including: (1) static multi-agent urban coverage, in which agents observe the environment from static locations, and (2) dynamic multi-agent urban coverage, in which agents move continuously through the environment. We experimentally evaluate six different multi-agent coverage methods: three types of ergodic coverage (that avoid buildings in different ways), lawn-mower sweep, Voronoi-region-based control, and a naive grid method. We evaluate all algorithms with respect to four performance metrics (percent coverage, revisit count, revisit time, and the integral of area viewed over time), across four types of urban environments, [low density, high density] x [short buildings, tall buildings], and for team sizes ranging from 2 to 25 agents. We believe this is the first extensive comparison of these methods in an urban setting. Our results highlight how the relative performance of static and dynamic methods changes based on the ratio of team size to search area, as well as the relative effects that different characteristics of urban environments (tall, short, dense, sparse, mixed) have on each algorithm.
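On a discretized map, the percent-coverage metric above reduces to the fraction of free (non-building) cells observed by any agent's sensor footprint. The grid representation and function names below are illustrative assumptions, not the paper's evaluation code:

```python
import numpy as np

def mark_footprint(visited, pos, radius):
    """Mark all grid cells within `radius` cells of the agent at `pos`
    (row, col) as observed, modeling a circular camera footprint."""
    rows, cols = visited.shape
    r0, c0 = pos
    for r in range(rows):
        for c in range(cols):
            if (r - r0) ** 2 + (c - c0) ** 2 <= radius ** 2:
                visited[r, c] = True

def percent_coverage(visited, free):
    """Fraction of free (non-building) cells observed so far."""
    total_free = free.sum()
    return float((visited & free).sum()) / total_free if total_free else 0.0
```

Revisit count and revisit time would be tracked analogously, by storing per-cell visit counters and timestamps instead of a boolean mask.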
We present a novel approach to detect, segment, and reconstruct complete textured 3D models of vehicles from a single image for autonomous driving. Our approach combines the strengths of deep learning with the elegance of traditional part-based deformable model representations to produce high-quality 3D models in the presence of severe occlusions. We present a new part-based deformable vehicle model that is used for instance segmentation, and we use it to automatically generate a dataset that contains dense correspondences between 2D images and 3D models. We also present a novel end-to-end deep neural network to predict dense 2D/3D mapping and highlight its benefits. Based on the dense mapping, we are able to compute precise 6-DoF poses and 3D reconstruction results at almost interactive rates on a commodity GPU. We have integrated these algorithms with an autonomous driving system. In practice, our method outperforms the state-of-the-art methods for all major vehicle parsing tasks: 2D instance segmentation by 4.4 points (mAP), 6-DoF pose estimation by 9.11 points, and 3D detection by 1.37 points. Moreover, we have released all of the source code, dataset, and trained model on GitHub.
Current methods for trajectory prediction operate in a supervised manner and therefore require vast quantities of corresponding ground-truth data for training. In this paper, we present a novel, label-free algorithm, AutoTrajectory, for trajectory extraction and prediction that uses raw videos directly. To better capture moving objects in videos, we introduce dynamic points. We use them to model dynamic motions, employing a forward-backward extractor to maintain temporal consistency and image reconstruction to maintain spatial consistency, in an unsupervised manner. We then aggregate dynamic points into instance points, which represent moving objects such as pedestrians in videos. Finally, we extract trajectories by matching instance points for prediction training. To the best of our knowledge, our method is the first to achieve unsupervised learning of trajectory extraction and prediction. We evaluate the performance on well-known trajectory datasets and show that our method is effective for real-world videos and can use raw videos to further improve the performance of existing models.
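The final step, matching instance points across frames into trajectories, can be sketched as greedy nearest-neighbor linking under a distance gate. This is a minimal illustration under assumed 2-D instance points, not the paper's matching algorithm:

```python
import math

def link_trajectories(frames, max_dist):
    """Greedily link instance points across consecutive frames into
    trajectories. `frames` is a list of per-frame lists of (x, y) points;
    a point extends the trajectory whose last point is nearest and within
    `max_dist`, otherwise it starts a new trajectory."""
    trajectories = []
    for points in frames:
        unmatched = list(points)
        for traj in trajectories:
            if not unmatched:
                break
            last = traj[-1]
            best = min(unmatched, key=lambda p: math.dist(p, last))
            if math.dist(best, last) <= max_dist:
                traj.append(best)
                unmatched.remove(best)
        for p in unmatched:  # leftover points seed new trajectories
            trajectories.append([p])
    return trajectories
```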
In this letter, we present an online motion planning algorithm for generating smooth, collision-free trajectories for quadrotors operating in an unknown, cluttered 3D environment. Our approach constructs a non-convex safe region, termed the generalized shape, at each timestep, which represents the obstacle-free region in the environment. A collision-free path is computed by sampling points in the generalized shape and is used to generate a smooth, time-parameterized trajectory by minimizing snap. The generated trajectories are constrained to lie within a convex subset of the generalized shape, which ensures the quadrotor maneuvers in the local obstacle-free space. This process is repeated at each timestep to re-plan trajectories until the quadrotor reaches its goal position. We evaluate the proposed method in simulation on complex 3D environments with high obstacle densities. We observe that each re-planning computation takes $\sim1.6$ milliseconds on a single thread of an Intel Core i5-8500 3.0 GHz CPU. In addition, our method is 6-10x faster than prior online motion planning methods, and we observe less conservative behavior in complex scenarios such as narrow passages.
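Minimizing snap means penalizing the fourth time derivative of the trajectory polynomial. As a minimal illustration of the objective only (not the paper's planner), the snap cost of a 1-D polynomial segment can be computed as follows:

```python
import numpy as np

def snap_cost(coeffs, T):
    """Integral of squared snap (4th derivative) of a 1-D polynomial
    trajectory p(t) over [0, T]. `coeffs` are in ascending power order.
    A minimum-snap planner chooses the coefficients that minimize this
    cost subject to waypoint and continuity constraints."""
    p = np.polynomial.Polynomial(coeffs)
    snap = p.deriv(4)
    sq = snap * snap
    antideriv = sq.integ()
    return antideriv(T) - antideriv(0)
```

For example, any cubic has zero snap and hence zero cost, which is why minimum-snap trajectories use higher-order (typically 7th-order) polynomials per segment.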
We present a novel approach to generate collision-free trajectories for a robot operating in close proximity to a human obstacle in an occluded environment. Self-occlusions of the robot can significantly reduce the accuracy of human motion prediction, so we present a novel deep learning-based prediction algorithm. Our formulation uses CNNs and LSTMs, and we augment human-action datasets with synthetically generated occlusion information for training. We also present an occlusion-aware planner that uses our motion prediction algorithm to compute collision-free trajectories. We highlight the performance of the overall approach (HMPO) in complex scenarios and observe up to 68% improvement in motion prediction accuracy and a 38% improvement in terms of error distance between the ground-truth and predicted human joint positions.
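One simple way to synthesize occlusion information for such data augmentation is to randomly drop skeleton joints from each training sample. The exact augmentation scheme below is an assumption for illustration, not the paper's method:

```python
import random

def occlude_joints(joints, p_occlude, seed=None):
    """Replace each joint (e.g., an (x, y) position) with None with
    probability `p_occlude`, simulating occluded joints for training-time
    data augmentation. A fixed seed makes the augmentation reproducible."""
    rng = random.Random(seed)
    return [None if rng.random() < p_occlude else j for j in joints]
```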