Recent Transformer-based 3D object detectors learn point cloud features from either point- or voxel-based representations. However, the former requires time-consuming sampling, while the latter introduces quantization errors. In this paper, we present a novel Point-Voxel Transformer for single-stage 3D detection (PVT-SSD) that takes advantage of both representations. Specifically, we first use voxel-based sparse convolutions for efficient feature encoding. We then propose a Point-Voxel Transformer (PVT) module that obtains long-range context cheaply from voxels while attaining accurate positions from points. The key to associating the two representations is our input-dependent Query Initialization module, which efficiently generates reference points and content queries. PVT then adaptively fuses long-range contextual and local geometric information around the reference points into the content queries. Further, to quickly find the neighboring points of reference points, we design a Virtual Range Image module, which generalizes the native range image to multi-sensor and multi-frame settings. Experiments on several autonomous driving benchmarks verify the effectiveness and efficiency of the proposed method. Code will be available at https://github.com/Nightmare-n/PVT-SSD.
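To make the point-voxel attention concrete, below is a minimal PyTorch sketch (our illustration, not the released PVT-SSD code) in which content queries tied to 3D reference points cross-attend to voxel features: the queries carry precise positions while the voxel tokens supply long-range context. All tensor shapes and module names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PointVoxelCrossAttention(nn.Module):
    """Hypothetical fusion step: point-anchored queries attend to voxel tokens."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pos_mlp = nn.Linear(3, dim)  # embed 3D positions into the feature space

    def forward(self, content_queries, ref_points, voxel_feats, voxel_centers):
        # content_queries: (B, Q, C), ref_points: (B, Q, 3)
        # voxel_feats: (B, V, C), voxel_centers: (B, V, 3)
        q = content_queries + self.pos_mlp(ref_points)   # accurate positions from points
        kv = voxel_feats + self.pos_mlp(voxel_centers)   # cheap long-range context from voxels
        fused, _ = self.attn(q, kv, kv)
        return content_queries + fused                   # residual update of content queries

B, Q, V, C = 2, 64, 256, 128
layer = PointVoxelCrossAttention(C)
out = layer(torch.randn(B, Q, C), torch.randn(B, Q, 3),
            torch.randn(B, V, C), torch.randn(B, V, 3))
print(out.shape)  # torch.Size([2, 64, 128])
```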
With the increasing prevalence of robots in daily life, it is crucial to enable them to construct reliable maps online while navigating unbounded and changing environments. Although existing methods can individually achieve spatial mapping and dynamic object detection and tracking, little research has addressed an effective combination of these two important abilities. The proposed framework, SMAT (Simultaneous Mapping and Tracking), integrates a front-end dynamic object detection and tracking module with a back-end static mapping module through a self-reinforcing mechanism that promotes mutual improvement of mapping and tracking performance. Experiments demonstrate the framework's effectiveness in real-world applications, achieving successful long-range navigation and mapping in multiple urban environments using a single LiDAR, a CPU-only onboard computer, and a consumer-grade GPS receiver.
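The toy sketch below illustrates one possible reading of the self-reinforcing mechanism, not the SMAT implementation: a 2D occupancy-evidence grid plays the role of the static map, points that contradict well-observed free space are flagged dynamic, and only the remaining points reinforce the map, so each module aids the other. Grid size, thresholds, and the occupancy test are all assumptions.

```python
import numpy as np

def smat_step(points, static_hits, total_obs, res=0.5, thresh=0.8):
    """points: (N, 2) scan in map frame; static_hits/total_obs: square evidence grids."""
    ij = np.floor(points / res).astype(int) % static_hits.shape[0]  # toy in-bounds wrap
    seen = total_obs[ij[:, 0], ij[:, 1]]
    occ = static_hits[ij[:, 0], ij[:, 1]] / np.maximum(seen, 1)
    # Front end: a point in a cell that was observed often but rarely occupied
    # is likely a moving object passing through known free space.
    dynamic = (seen > 5) & (occ < 1 - thresh)
    # Back end: only points judged static reinforce the map.
    stat = ij[~dynamic]
    np.add.at(static_hits, (stat[:, 0], stat[:, 1]), 1)
    np.add.at(total_obs, (ij[:, 0], ij[:, 1]), 1)
    return dynamic, static_hits, total_obs

hits, obs = np.zeros((64, 64)), np.zeros((64, 64))
scan = np.random.rand(100, 2) * 30.0
mask, hits, obs = smat_step(scan, hits, obs)
print(mask.sum(), "points flagged dynamic")
```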
Deriving strategies for multiple agents in adversarial scenarios poses a significant challenge in attaining both optimality and efficiency. In this paper, we propose an efficient strategy for cooperative defense against a group of attackers in a convex environment. The defenders aim to minimize the total number of attackers that successfully enter the target set, without prior knowledge of the attackers' strategies. Our approach involves a two-scale method that decomposes the problem into coordination against a single attacker and assignment of defenders to attackers. We first develop a coordination strategy for multiple defenders against a single attacker based on online convex programming. This yields the maximum defense-winning region of initial joint states from which the defenders can successfully defend against a single attacker. We then propose an allocation algorithm that significantly reduces the computational effort required to solve the induced integer linear program. The allocation guarantees that defense performance improves as the game progresses. Extensive simulations, including one on the Gazebo platform with the Robot Operating System, verify the efficiency of our algorithm compared with state-of-the-art approaches.
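As a hedged illustration of the allocation step, the sketch below relaxes the induced integer linear program to a one-to-one matching solved with the Hungarian algorithm: given a boolean matrix (computed elsewhere, e.g. from the defense-winning region) of which defender can beat which attacker, it maximizes the number of intercepted attackers. This simplification is ours, not the paper's exact allocation algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_defenders(win):
    """win[i, j] = 1 if defender i can win against attacker j (hypothetical input)."""
    cost = 1.0 - win.astype(float)            # each missed interception costs 1
    rows, cols = linear_sum_assignment(cost)  # Hungarian matching
    return [(i, j) for i, j in zip(rows, cols) if win[i, j]]

win = np.array([[1, 0, 1],
                [0, 1, 0],
                [1, 1, 0]])
print(assign_defenders(win))  # e.g. [(0, 2), (1, 1), (2, 0)]: all attackers covered
```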
In this paper, we focus on category-level object pose estimation, which is challenging due to large intra-category shape variation. Methods based on 3D graph convolution (3D-GC) have been widely used to extract local geometric features, but they struggle with complex-shaped objects and are sensitive to noise. Moreover, the scale and translation invariance of 3D-GC restricts the perception of an object's size and translation. We propose a simple network structure, the HS-layer, which extends 3D-GC to extract hybrid-scope latent features from point cloud data for category-level object pose estimation. The proposed HS-layer: 1) perceives local-global geometric structure and global information, 2) is robust to noise, and 3) encodes size and translation information. Our experiments show that simply replacing the 3D-GC layer with the proposed HS-layer in the baseline method (GPV-Pose) yields a significant improvement, increasing performance by 14.5% on the 5°2cm metric and 10.3% on IoU75. Our method outperforms the state of the art by a large margin (8.3% on 5°2cm, 6.9% on IoU75) on the REAL275 dataset and runs in real time (50 FPS).
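The PyTorch sketch below shows the "hybrid scope" idea as we read it from the abstract, not the released HS-layer: local graph features in the 3D-GC style (translation/scale invariant) are fused with per-point absolute coordinates and a global descriptor, restoring the size and translation cues that 3D-GC alone discards. Layer structure, dimensions, and names are our assumptions.

```python
import torch
import torch.nn as nn

class HybridScopeLayer(nn.Module):
    def __init__(self, in_dim=64, out_dim=128, k=16):
        super().__init__()
        self.k = k
        self.local = nn.Linear(2 * in_dim, out_dim)           # edge features (x_j - x_i, x_i)
        self.fuse = nn.Linear(out_dim + in_dim + 3, out_dim)  # + global feats + raw xyz

    def forward(self, xyz, feats):                      # xyz: (B, N, 3), feats: (B, N, C)
        d = torch.cdist(xyz, xyz)                       # pairwise distances for the kNN graph
        idx = d.topk(self.k, largest=False).indices     # (B, N, k) neighbor indices
        nbr = torch.gather(feats.unsqueeze(1).expand(-1, feats.size(1), -1, -1),
                           2, idx.unsqueeze(-1).expand(-1, -1, -1, feats.size(-1)))
        edge = torch.cat([nbr - feats.unsqueeze(2),
                          feats.unsqueeze(2).expand_as(nbr)], dim=-1)
        local = self.local(edge).max(dim=2).values      # local geometric structure
        glob = feats.max(dim=1, keepdim=True).values.expand_as(feats)  # global scope
        return self.fuse(torch.cat([local, glob, xyz], dim=-1))  # xyz restores size/translation

x, f = torch.randn(2, 256, 3), torch.randn(2, 256, 64)
print(HybridScopeLayer()(x, f).shape)  # torch.Size([2, 256, 128])
```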
Despite the tremendous progress of Masked Autoencoders (MAE) on vision tasks such as images and videos, exploring MAE in large-scale 3D point clouds remains challenging due to their inherent irregularity. In contrast to previous 3D MAE frameworks, which either design a complex decoder to infer masked information from the maintained regions or adopt sophisticated masking strategies, we propose a much simpler paradigm. The core idea is to apply a \textbf{G}enerative \textbf{D}ecoder for MAE (GD-MAE) that automatically merges the surrounding context to restore the masked geometric knowledge in a hierarchical fusion manner. In doing so, our approach is free from heuristic decoder designs and enjoys the flexibility of exploring various masking strategies. The corresponding part incurs less than \textbf{12\%} of the latency of conventional methods while achieving better performance. We demonstrate the efficacy of the proposed method on several large-scale benchmarks: Waymo, KITTI, and ONCE. Consistent improvements on downstream detection tasks illustrate strong robustness and generalization capability. Not only does our method achieve state-of-the-art results, but remarkably, it attains comparable accuracy with only \textbf{20\%} of the labeled data on the Waymo dataset. The code will be released at \url{https://github.com/Nightmare-n/GD-MAE}.
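The minimal sketch below conveys the generative-decoder idea as described, under our own simplifying assumptions (single-scale rather than hierarchical fusion, illustrative shapes and heads): visible tokens are scattered back onto a dense BEV grid and plain (transposed) convolutions propagate surrounding context into the masked cells, so no mask tokens or heuristic decoder queries are needed.

```python
import torch
import torch.nn as nn

class GenerativeDecoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.ctx = nn.Sequential(                        # enlarge the receptive field cheaply
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, dim, 2, stride=2))   # back to full resolution
        self.head = nn.Conv2d(dim, 3 * 64, 1)            # e.g. regress 64 xyz offsets per cell

    def forward(self, vis_feats, vis_idx, grid_hw):
        # vis_feats: (M, C) visible tokens; vis_idx: (M, 2) their (row, col) cells
        H, W = grid_hw
        dense = vis_feats.new_zeros(H * W, vis_feats.size(1))
        dense[vis_idx[:, 0] * W + vis_idx[:, 1]] = vis_feats   # scatter visible tokens
        dense = dense.t().reshape(1, -1, H, W)
        fused = dense + self.ctx(dense)                  # context flows into masked cells
        return self.head(fused)                          # predictions everywhere, incl. masked

feats, idx = torch.randn(50, 128), torch.randint(0, 32, (50, 2))
print(GenerativeDecoder()(feats, idx, (32, 32)).shape)  # torch.Size([1, 192, 32, 32])
```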
Current 3D object detection methods heavily rely on an enormous amount of annotations. Semi-supervised learning can alleviate this issue. Previous semi-supervised 3D object detection methods directly follow the practice of fully-supervised methods to augment labeled and unlabeled data, which is sub-optimal. In this paper, we design a data augmentation method for semi-supervised learning, which we call Semi-Sampling. Specifically, we use ground-truth labels and pseudo labels to crop gt samples and pseudo samples from labeled and unlabeled frames, respectively, generating a gt sample database and a pseudo sample database. When training a teacher-student semi-supervised framework, we randomly paste gt samples and pseudo samples into both labeled and unlabeled frames, providing strong data augmentation for both. Semi-Sampling can be regarded as an extension of gt-sampling to semi-supervised learning. Our method is simple but effective, consistently improving state-of-the-art methods on the ScanNet, SUN RGB-D, and KITTI benchmarks by large margins. For example, when training with only 10% of labeled data on ScanNet, we improve 3DIoUMatch by 3.1 mAP and 6.4 mAP in terms of mAP@0.25 and mAP@0.5, respectively. When training with only 1% of labeled data on KITTI, we boost 3DIoUMatch by 3.5 mAP, 6.7 mAP, and 14.1 mAP on the car, pedestrian, and cyclist classes, respectively. Code will be made publicly available at https://github.com/LittlePey/Semi-Sampling.
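Below is a hedged, minimal sketch of the crop-and-paste mechanics as we understand them: object points are cropped into a database using (ground-truth or pseudo) boxes, then randomly drawn samples are pasted into other frames. The box format and the collision test are simplified assumptions (axis-aligned BEV overlap, no rotation), not the paper's exact implementation.

```python
import numpy as np

def crop_samples(points, boxes):
    """points: (N, 3); boxes: (M, 6) as (cx, cy, cz, dx, dy, dz). Returns crops."""
    samples = []
    for b in boxes:
        inside = np.all(np.abs(points[:, :3] - b[:3]) <= b[3:6] / 2, axis=1)
        samples.append({'points': points[inside] - b[:3], 'box': b.copy()})
    return samples

def paste_samples(points, boxes, database, num=5, rng=np.random):
    """Paste up to `num` database samples that do not collide with existing boxes."""
    for s in rng.choice(database, size=min(num, len(database)), replace=False):
        b = s['box']
        # crude axis-aligned BEV overlap test against already-placed boxes
        if len(boxes) and np.any(np.all(np.abs(boxes[:, :2] - b[:2])
                                        < (boxes[:, 3:5] + b[3:5]) / 2, axis=1)):
            continue
        points = np.vstack([points, s['points'] + b[:3]])
        boxes = np.vstack([boxes, b[None]])
    return points, boxes
```

In a teacher-student setup, the same `paste_samples` call would be applied to labeled frames (gt database) and to unlabeled frames (pseudo database), mirroring how gt-sampling is extended to the unlabeled stream.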
This paper studies real-time motion planning and control for ball-bumping motions with quadruped robots. To enable the quadruped to bump a flying ball under different initializations, we develop a planning scheme based on nonlinear trajectory optimization that jointly identifies the take-off time and state to achieve accurate ball hitting during the flight phase. The planning scheme employs a two-dimensional single-rigid-body model that strikes a satisfactory balance between accuracy and efficiency for this highly time-sensitive task. To precisely execute the planned motion, the tracking controller needs to incorporate the strict time-state constraints imposed by the take-off and ball-hitting events. To this end, we develop an improved model predictive controller that respects these critical time-state constraints. The proposed planning and control framework is validated on a real Aliengo robot. Experiments show that the proposed planning approach runs in approximately 60 ms on average, enabling successful ball-bumping motions with various initializations in real time.
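As a worked micro-example of one ingredient such a planner needs, the snippet below predicts when a ballistic ball reaches a chosen hitting height, which pins down the time constraint on the take-off and hit events. The numbers are illustrative only and unrelated to the paper's experiments.

```python
import math

def time_to_height(z0, vz0, z_hit, g=9.81):
    """Solve z0 + vz0*t - 0.5*g*t**2 = z_hit for the later (descending) root."""
    disc = vz0**2 + 2 * g * (z0 - z_hit)
    if disc < 0:
        raise ValueError("ball never reaches z_hit")
    return (vz0 + math.sqrt(disc)) / g

# Ball released 2.0 m high with 1.0 m/s upward velocity, to be hit at 0.8 m:
t_hit = time_to_height(2.0, 1.0, 0.8)
print(f"hit in {t_hit:.3f} s")  # ~0.607 s
```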
This paper studies the problem of constructing polytopic representations of planar regions from depth camera readings. This problem is important for terrain mapping in complicated environments and has great potential in legged locomotion applications. To address the polytopic planar region characterization problem, we propose a two-stage solution. In the first stage, the planar regions embedded within a sequence of depth images are extracted individually and then merged to establish a terrain map containing only planar regions in a selected frame. To obtain representations of the planar regions that are amenable to foothold planning for legged robots, the second stage approximates the extracted planar regions with low-dimensional polytopes. With this polytopic representation, the proposed approach strikes a good balance between accuracy and simplicity. Experimental validations with RGB-D cameras demonstrate that the proposed scheme successfully characterizes planar regions via polytopes with acceptable accuracy. More importantly, the run time of the overall perception scheme is below 10 ms (i.e., above 100 Hz) throughout the tests, underscoring the efficiency of our approach.
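The sketch below illustrates the second stage under simplifying assumptions of our own, not the paper's exact algorithm: a planar region's inlier points are projected into the plane's 2D coordinates, their convex hull is taken, and hull vertices are greedily dropped while the area loss stays small, yielding a low-dimensional polytope suitable for foothold constraints.

```python
import numpy as np
from scipy.spatial import ConvexHull

def polygon_area(v):
    """Shoelace formula for a 2D polygon given as ordered vertices (K, 2)."""
    x, y = v[:, 0], v[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def simplify_polytope(points_2d, max_vertices=6, max_area_loss=0.05):
    verts = points_2d[ConvexHull(points_2d).vertices]   # ordered hull vertices
    full = polygon_area(verts)
    while len(verts) > max_vertices:
        # drop the vertex whose removal loses the least area (inner approximation)
        losses = [full - polygon_area(np.delete(verts, i, axis=0))
                  for i in range(len(verts))]
        i = int(np.argmin(losses))
        if losses[i] / full > max_area_loss:
            break
        verts = np.delete(verts, i, axis=0)
    return verts

pts = np.random.rand(200, 2)                 # stand-in for projected plane inliers
print(simplify_polytope(pts))                # few-vertex polygon covering the region
```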
Large numbers of dynamic objects diversify the spatial structures that robots observe when navigating urban environments, making the online removal of dynamic objects critical. In this paper, we introduce a novel online removal framework for highly dynamic urban environments. The framework consists of a scan-to-map front-end module and a map-to-map back-end module, both of which deeply integrate visibility-based and map-based approaches. Experiments validate the framework in highly dynamic simulation scenarios and on real-world datasets.
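The toy sketch below shows the visibility principle such modules typically build on (our illustration; the projection model, field of view, and thresholds are assumptions): if the current scan measures a larger range than a stored map point along the same ray, that map point must belong to an object that has since moved.

```python
import numpy as np

def dynamic_mask(map_pts, scan_range_img, h_res=0.2, margin=0.3):
    """map_pts: (N, 3) in sensor frame; scan_range_img: (rows, cols) of ranges [m].
    Assumes a toy spherical projection: 1-deg vertical bins over [-25, 25] deg."""
    r = np.linalg.norm(map_pts, axis=1)
    az = np.degrees(np.arctan2(map_pts[:, 1], map_pts[:, 0]))
    el = np.degrees(np.arcsin(map_pts[:, 2] / np.maximum(r, 1e-6)))
    col = ((az + 180.0) / h_res).astype(int) % scan_range_img.shape[1]
    row = np.clip((el + 25.0).astype(int), 0, scan_range_img.shape[0] - 1)
    # dynamic: the scan sees *through* where the map point used to be
    return scan_range_img[row, col] > r + margin

img = np.full((50, 1800), 50.0)                         # scan sees far everywhere
print(dynamic_mask(np.array([[5.0, 0.0, 0.0]]), img))   # [ True]
```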
This paper studies capturability and push recovery for quadrupedal locomotion. Despite the rich literature on capturability analysis and push recovery control for legged robots, existing tools are developed mainly for bipeds or humanoids. Distinct quadrupedal features, such as point contacts and multiple swinging legs, prevent the direct application of these methods. To address this gap, we propose a switched-systems model of quadruped dynamics and instantiate the abstract viability concept for quadrupedal locomotion with a time-based gait. Capturability is characterized through a novel specification of dynamically balanced states that addresses the time-varying nature of quadrupedal locomotion and balance. A linear inverted pendulum (LIP) model is adopted to demonstrate the theory and to show how the newly developed quadrupedal capturability can be used in motion planning for push recovery. We formulate and solve an explicit model predictive control (EMPC) problem whose optimal solution fully characterizes quadrupedal capturability with the LIP. Based on this analysis, an optimization-based planning scheme is devised to determine footsteps and center-of-mass references during push recovery. To validate the effectiveness of the overall framework, we conduct extensive simulation and hardware experiments. Simulation results illustrate the necessity of considering dynamic balance for quadrupedal capturability and the significant improvement in disturbance rejection achieved by the proposed strategy. Experimental validation on a replica of the Mini Cheetah quadruped demonstrates up to a 100% improvement over the state of the art.
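As a worked example of the standard LIP quantity underlying capturability analysis, the snippet below computes the instantaneous capture point xi = x + xdot/omega with omega = sqrt(g/z0): if xi lies inside the support region, the robot can come to rest without stepping. The paper's contribution is to generalize this kind of analysis to time-varying quadrupedal gaits and point feet; the numbers here are illustrative only.

```python
import math

def capture_point(x, xdot, z0, g=9.81):
    """Instantaneous capture point of the linear inverted pendulum (LIP)."""
    omega = math.sqrt(g / z0)   # LIP natural frequency
    return x + xdot / omega

# CoM 0.30 m high, displaced 0.05 m and moving at 0.4 m/s after a push:
xi = capture_point(0.05, 0.4, 0.30)
print(f"capture point at {xi:.3f} m")  # ~0.120 m ahead of the pivot
```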