"Object Detection": models, code, and papers

Event-based YOLO Object Detection: Proof of Concept for Forward Perception System

Jan 10, 2023
Waseem Shariff, Muhammad Ali Farooq, Joe Lemley, Peter Corcoran

Neuromorphic vision, or event vision, is an advanced vision technology: in contrast to a conventional visible-light camera that outputs pixel frames, an event sensor generates neuromorphic events whenever a brightness change in its field of view (FOV) exceeds a specific threshold. This study focuses on leveraging neuromorphic event data for roadside object detection, as a proof of concept towards building artificial intelligence (AI) based pipelines for forward perception systems in advanced vehicular applications. The focus is on building efficient state-of-the-art object detection networks with better inference results for fast-moving forward perception using an event camera. In this article, the event-simulated A2D2 dataset is manually annotated and used to train two different YOLOv5 networks (small and large variants). To further assess robustness, both single-model and ensemble-model testing are carried out.

* 7 pages, 9 figures, ICMV conference 2022 
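
To make the event-based pipeline concrete, the following minimal sketch (not the authors' code) shows one common way to accumulate asynchronous events into a fixed-size tensor that a frame-based detector such as YOLOv5 can consume; the (t, x, y, polarity) layout and the temporal binning scheme are assumptions for illustration.

```python
import numpy as np

def events_to_frame(events, height, width, bins=3):
    """Accumulate asynchronous events into a (bins, H, W) tensor.

    `events` is assumed to be an (N, 4) array with columns
    (timestamp, x, y, polarity in {-1, +1}).
    """
    frame = np.zeros((bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return frame
    t = events[:, 0]
    # Map timestamps into [0, bins) and accumulate signed polarity per pixel.
    t_idx = np.clip(((t - t.min()) / (t.max() - t.min() + 1e-9) * bins).astype(int), 0, bins - 1)
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    np.add.at(frame, (t_idx, y, x), events[:, 3])
    return frame

# Example: 1000 synthetic events on a 260x346 sensor.
rng = np.random.default_rng(0)
ev = np.column_stack([
    np.sort(rng.uniform(0, 0.05, 1000)),   # timestamps (s)
    rng.integers(0, 346, 1000),            # x
    rng.integers(0, 260, 1000),            # y
    rng.choice([-1.0, 1.0], 1000),         # polarity
])
print(events_to_frame(ev, 260, 346).shape)  # (3, 260, 346)
```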

Rethinking Voxelization and Classification for 3D Object Detection

Jan 10, 2023
Youshaa Murhij, Alexander Golodkov, Dmitry Yudin

The main challenge in 3D object detection from LiDAR point clouds is achieving real-time performance without compromising the reliability of the network; in other words, the detection network must be sufficiently confident about its predictions. In this paper, we present a solution that improves inference speed and precision at the same time by implementing a fast dynamic voxelizer that works on fast pillar-based models in the same way a voxelizer works on slower voxel-based models. In addition, we propose a lightweight detection sub-head model that classifies predicted objects and filters out falsely detected ones, significantly improving model precision at negligible time and computing cost. The developed code is publicly available at: https://github.com/YoushaaMurhij/RVCDet.

* Accepted in ICONIP 2022. arXiv admin note: text overlap with arXiv:1902.06326 by other authors 
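
The core idea of a dynamic (cap-free) pillar voxelizer can be sketched as follows; this is an illustrative reconstruction, not the RVCDet implementation, and the grid size and ranges are placeholder values.

```python
import numpy as np

def dynamic_pillar_voxelize(points, voxel_size=(0.5, 0.5), pc_range=(-50, -50, 50, 50)):
    """Assign every in-range point to a pillar index instead of dropping points
    once a fixed per-voxel budget is reached (the 'dynamic' part)."""
    x_min, y_min, x_max, y_max = pc_range
    keep = ((points[:, 0] >= x_min) & (points[:, 0] < x_max) &
            (points[:, 1] >= y_min) & (points[:, 1] < y_max))
    pts = points[keep]
    ix = ((pts[:, 0] - x_min) / voxel_size[0]).astype(np.int64)
    iy = ((pts[:, 1] - y_min) / voxel_size[1]).astype(np.int64)
    grid_w = int((x_max - x_min) / voxel_size[0])
    pillar_id = iy * grid_w + ix                     # flat pillar index per point
    occupied, slot = np.unique(pillar_id, return_inverse=True)
    return pts, slot, occupied                       # points, per-point pillar slot, occupied pillars

pts = (np.random.randn(2048, 4) * 10).astype(np.float32)  # x, y, z, intensity
p, slot, occupied = dynamic_pillar_voxelize(pts)
print(len(occupied), "occupied pillars for", len(p), "kept points")
```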

MonoEdge: Monocular 3D Object Detection Using Local Perspectives

Jan 04, 2023
Minghan Zhu, Lingting Ge, Panqu Wang, Huei Peng

We propose a novel approach for monocular 3D object detection that leverages the local perspective effect of each object. While the global perspective effect, seen as size and position variations, has been exploited extensively for monocular 3D detection, the local perspective has long been overlooked. We design a local perspective module that regresses a newly defined variable named keyedge-ratios, a parameterization of the local shape distortion that accounts for the local perspective, and derives the object depth and yaw angle from it. In theory, this module does not rely on the pixel-wise size or position of objects in the image and is therefore independent of the camera intrinsic parameters. By plugging this module into existing monocular 3D object detection frameworks, we combine the local perspective distortion with the global perspective effect for monocular 3D reasoning, and we demonstrate its effectiveness and superior performance over strong baseline methods on multiple datasets.

* WACV 2023 

Model-Agnostic Hierarchical Attention for 3D Object Detection

Jan 06, 2023
Manli Shu, Le Xue, Ning Yu, Roberto Martín-Martín, Juan Carlos Niebles, Caiming Xiong, Ran Xu

Transformers, as versatile network architectures, have recently seen great success in 3D point cloud object detection. However, the lack of hierarchy in a plain transformer makes it difficult to learn features at different scales and restrains its ability to extract localized features. This limitation leads to imbalanced performance across object sizes, with inferior results on smaller objects. In this work, we propose two novel attention mechanisms as modularized hierarchical designs for transformer-based 3D detectors. To enable feature learning at different scales, we propose Simple Multi-Scale Attention, which builds multi-scale tokens from a single-scale input feature. For localized feature aggregation, we propose Size-Adaptive Local Attention with adaptive attention ranges for every bounding box proposal. Both attention modules are model-agnostic network layers that can be plugged into existing point cloud transformers for end-to-end training. We evaluate our method on two widely used indoor 3D point cloud object detection benchmarks. By plugging our proposed modules into a state-of-the-art transformer-based 3D detector, we improve the previous best results on both benchmarks, with the largest improvement margin on small objects.
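
As a rough illustration of the multi-scale idea (not the authors' implementation), the sketch below builds coarser token sets from a single-scale input by strided average pooling and lets the finest-scale queries attend over all scales; dimensions and pooling choices are assumptions.

```python
import torch
import torch.nn as nn

class SimpleMultiScaleAttentionSketch(nn.Module):
    """Toy multi-scale attention: keys/values come from pooled copies of the
    single-scale tokens at several strides."""
    def __init__(self, dim=256, heads=8, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens):                            # tokens: (B, N, C)
        multi = []
        for s in self.scales:
            if s == 1:
                multi.append(tokens)
            else:
                pooled = nn.functional.avg_pool1d(tokens.transpose(1, 2), kernel_size=s, stride=s)
                multi.append(pooled.transpose(1, 2))      # (B, N/s, C)
        kv = torch.cat(multi, dim=1)                      # tokens from all scales
        out, _ = self.attn(tokens, kv, kv)                # queries stay at the finest scale
        return out

x = torch.randn(2, 128, 256)
print(SimpleMultiScaleAttentionSketch()(x).shape)         # torch.Size([2, 128, 256])
```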

Small Moving Object Detection Algorithm Based on Motion Information

Jan 05, 2023
Ziwei Sun, Zexi Hua, Hengcao Li

A Small Moving Object Detection algorithm Based on Motion Information (SMOD-BMI) is proposed to detect small moving objects with a low Signal-to-Noise Ratio (SNR). First, to capture suspicious moving objects, a ConvLSTM-SCM-PAN model structure is designed, in which a Convolutional Long Short-Term Memory (ConvLSTM) network fuses temporal and spatial information, a Selective Concatenate Module (SCM) addresses channel imbalance during feature fusion, and a Path Aggregation Network (PAN) locates the suspicious moving objects. Then, an object tracking algorithm tracks the suspicious moving objects and calculates their Motion Range (MR). At the same time, the size of each MR is adjusted adaptively according to the object's moving speed (specifically, if an object moves slowly, its MR is expanded according to its speed to retain contextual environment information), yielding an Adaptive Candidate Motion Range (ACMR) that improves the SNR of the moving object while adaptively preserving the necessary context. Finally, a LightWeight SCM U-Shape Net (LW-SCM-USN) based on the ACMR, with an SCM module, is designed to classify and locate small moving objects accurately and quickly. In this paper, moving birds in surveillance video are used as the experimental dataset to verify the performance of the algorithm. The experimental results show that the proposed small moving object detection method based on motion information effectively reduces the miss rate and false detection rate, and its performance surpasses existing state-of-the-art small moving object detection methods.
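
The adaptive motion-range idea can be illustrated with a small helper that enlarges the crop around a slowly moving tracked object so that more context is retained; the scaling rule and all constants below are assumptions, not the formula from the paper.

```python
def adaptive_candidate_motion_range(bbox, speed, frame_shape,
                                    base_margin=16, slow_speed=2.0, max_scale=4.0):
    """Return an enlarged crop (ACMR) around a tracked box.

    bbox: (x1, y1, x2, y2) in pixels, speed: pixels per frame, frame_shape: (H, W).
    Slower objects get a larger context window; fast objects keep a tight crop.
    """
    x1, y1, x2, y2 = bbox
    h, w = frame_shape
    # Scale decays toward 1 as speed approaches `slow_speed`.
    scale = 1.0 + (max_scale - 1.0) * max(0.0, slow_speed - speed) / slow_speed
    margin_x = base_margin * scale + (x2 - x1) * (scale - 1.0) / 2
    margin_y = base_margin * scale + (y2 - y1) * (scale - 1.0) / 2
    return (max(0, int(x1 - margin_x)), max(0, int(y1 - margin_y)),
            min(w, int(x2 + margin_x)), min(h, int(y2 + margin_y)))

print(adaptive_candidate_motion_range((100, 80, 120, 96), speed=0.5, frame_shape=(720, 1280)))
```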

GPTR: Gestalt-Perception Transformer for Diagram Object Detection

Dec 29, 2022
Xin Hu, Lingling Zhang, Jun Liu, Jinfu Fan, Yang You, Yaqiang Wu

Diagram object detection is a key basis of practical applications such as textbook question answering. Because a diagram consists mainly of simple lines and color blocks, its visual features are sparser than those of natural images. In addition, diagrams usually express diverse knowledge, so many object categories in diagrams are low-frequency. As a result, traditional data-driven detection models are not well suited to diagrams. In this work, we propose a gestalt-perception transformer model for diagram object detection based on an encoder-decoder architecture. Gestalt perception comprises a series of laws explaining human perception: the human visual system tends to perceive patches in an image that are similar, close, or connected without abrupt directional changes as a single whole object. Inspired by these ideas, we build a gestalt-perception graph in the transformer encoder, composed of diagram patches as nodes and the relationships between patches as edges. This graph groups patches into objects via the laws of similarity, proximity, and smoothness implied in the edges, so that meaningful objects can be detected effectively. The experimental results demonstrate that the proposed GPTR achieves the best results on the diagram object detection task. Our model also obtains results comparable to competitors on natural image object detection.
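
A toy version of the gestalt-perception graph construction, covering only the similarity and proximity laws, might look like the following; the thresholds, features, and omitted smoothness law are assumptions made for illustration.

```python
import numpy as np

def gestalt_edges(features, centers, sim_thresh=0.8, dist_thresh=0.2):
    """Connect two patches when their feature cosine similarity is high
    (similarity law) or their centers are close (proximity law).

    features: (N, C) patch embeddings, centers: (N, 2) normalized patch centers.
    """
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    sim = f @ f.T                                                    # cosine similarity
    dist = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    adj = (sim > sim_thresh) | (dist < dist_thresh)
    np.fill_diagonal(adj, False)
    return np.argwhere(adj)                                          # (E, 2) edge list

feats = np.random.randn(16, 64)
ctrs = np.random.rand(16, 2)
print(len(gestalt_edges(feats, ctrs)), "edges among 16 patches")
```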

Super Sparse 3D Object Detection

Jan 05, 2023
Lue Fan, Yuxue Yang, Feng Wang, Naiyan Wang, Zhaoxiang Zhang

As the perception range of LiDAR expands, LiDAR-based 3D object detection contributes increasingly to long-range perception in autonomous driving. Mainstream 3D object detectors often build dense feature maps, whose cost is quadratic in the perception range, making them hard to scale to long-range settings. To enable efficient long-range detection, we first propose a fully sparse object detector termed FSD. FSD is built upon a general sparse voxel encoder and a novel Sparse Instance Recognition (SIR) module. SIR groups points into instances and applies highly efficient instance-wise feature extraction. The instance-wise grouping sidesteps the issue of missing center features, which otherwise hinders the design of a fully sparse architecture. To further exploit the fully sparse characteristic, we leverage temporal information to remove data redundancy and propose a super sparse detector named FSD++. FSD++ first generates residual points, which indicate the point changes between consecutive frames. The residual points, along with a few previous foreground points, form the super sparse input data, greatly reducing data redundancy and computational overhead. We comprehensively analyze our method on the large-scale Waymo Open Dataset, where state-of-the-art performance is reported. To showcase the superiority of our method in long-range detection, we also conduct experiments on the Argoverse 2 dataset, whose perception range ($200m$) is much larger than that of the Waymo Open Dataset ($75m$). Code is open-sourced at https://github.com/tusen-ai/SST.

* Extension of Fully Sparse 3D Object Detection [arXiv:2207.10035] 
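
The residual-point idea can be approximated with a simple voxel-occupancy difference between consecutive sweeps, as in the sketch below; the voxel size is a placeholder and ego-motion compensation is ignored, so this is only a rough reconstruction of the concept, not the FSD++ pipeline.

```python
import numpy as np

def residual_points(curr, prev, voxel=0.5):
    """Keep only the points of the current sweep that fall into voxels
    unoccupied in the previous sweep."""
    def voxel_keys(pts):
        return set(map(tuple, np.floor(pts[:, :3] / voxel).astype(np.int64)))
    prev_occupied = voxel_keys(prev)
    keys = np.floor(curr[:, :3] / voxel).astype(np.int64)
    mask = np.array([tuple(k) not in prev_occupied for k in keys])
    return curr[mask]

prev = np.random.randn(5000, 4) * 20
curr = np.concatenate([prev + np.array([0.05, 0.0, 0.0, 0.0]),   # mostly static scene
                       np.random.randn(200, 4) * 20 + 40])        # a few genuinely new points
print(residual_points(curr, prev).shape)                          # only the changed points remain
```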

HRTransNet: HRFormer-Driven Two-Modality Salient Object Detection

Jan 08, 2023
Bin Tang, Zhengyi Liu, Yacheng Tan, Qian He

The High-Resolution Transformer (HRFormer) can maintain high-resolution representations and share global receptive fields, which makes it well suited to salient object detection (SOD), where the input and output have the same resolution. However, two critical problems need to be solved for two-modality SOD: the fusion of the two modalities, and the fusion of HRFormer's multi-resolution outputs. To address the first problem, the supplementary modality is injected into the primary modality using global optimization and an attention mechanism that selects and purifies the modality at the input level. To solve the second problem, a dual-direction short connection fusion module optimizes the output features of HRFormer, enhancing the detailed representation of objects at the output level. The proposed model, named HRTransNet, first introduces an auxiliary stream for feature extraction from the supplementary modality. Then, these features are injected into the primary modality at the beginning of each multi-resolution branch. Next, HRFormer is applied for forward propagation. Finally, all the output features at different resolutions are aggregated by intra-feature and inter-feature interactive transformers. The proposed model yields impressive improvements on two-modality SOD tasks, e.g., RGB-D, RGB-T, and light field SOD. Code: https://github.com/liuzywen/HRTransNet

* TCSVT2022  
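
One simple way to realize input-level injection of a supplementary modality is a channel-attention gate, sketched below; it mirrors the select-and-purify idea but is not the actual HRTransNet design, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn

class SupplementaryInjection(nn.Module):
    """Gate the supplementary stream (e.g. depth or thermal) with channel
    attention before adding it to the primary RGB features."""
    def __init__(self, channels=64, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, primary, supplementary):           # both (B, C, H, W)
        return primary + supplementary * self.gate(supplementary)

rgb, depth = torch.randn(2, 64, 56, 56), torch.randn(2, 64, 56, 56)
print(SupplementaryInjection()(rgb, depth).shape)        # torch.Size([2, 64, 56, 56])
```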

CAT: LoCalization and IdentificAtion Cascade Detection Transformer for Open-World Object Detection

Jan 05, 2023
Shuailei Ma, Yuefeng Wang, Jiaqi Fan, Ying Wei, Thomas H. Li, Hongli Liu, Fanbing Lv

Open-world object detection (OWOD) is a more general and challenging goal that requires a model trained on data of known objects to detect both known and unknown objects and to incrementally learn to identify the unknown ones. Existing works, which employ a standard detection framework and a fixed pseudo-labelling mechanism (PLM), have the following problems: (i) including the detection of unknown objects substantially reduces the model's ability to detect known ones; (ii) the PLM does not adequately utilize the prior knowledge of the inputs; (iii) the fixed selection manner of the PLM cannot guarantee that the model is trained in the right direction. We observe that humans subconsciously prefer to first focus on all foreground objects and then identify each one in detail, rather than localizing and identifying a single object simultaneously, which alleviates confusion. This motivates us to propose a novel solution called CAT: LoCalization and IdentificAtion Cascade Detection Transformer, which decouples the detection process via a shared decoder in a cascaded decoding manner. Meanwhile, we propose a self-adaptive pseudo-labelling mechanism that combines model-driven and input-driven PLM and self-adaptively generates robust pseudo-labels for unknown objects, significantly improving CAT's ability to retrieve unknown objects. Comprehensive experiments on two benchmark datasets, i.e., MS-COCO and PASCAL VOC, show that our model outperforms the state-of-the-art on all metrics for OWOD, incremental object detection (IOD), and open-set detection.
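
The cascaded use of a shared decoder, first for localization and then for identification, can be sketched as follows; heads, dimensions, and the query/memory layout are placeholders, not CAT's actual configuration.

```python
import torch
import torch.nn as nn

class CascadeSharedDecoderSketch(nn.Module):
    """Run the same decoder twice: pass 1 refines queries for class-agnostic
    localization, pass 2 reuses the identical weights for identification."""
    def __init__(self, dim=256, heads=8, layers=3, num_classes=81):
        super().__init__()
        layer = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=layers)
        self.box_head = nn.Linear(dim, 4)            # localization output
        self.cls_head = nn.Linear(dim, num_classes)  # identification output

    def forward(self, queries, memory):              # queries: (B, Q, C), memory: (B, HW, C)
        loc_q = self.decoder(queries, memory)        # pass 1: where are the foreground objects?
        cls_q = self.decoder(loc_q, memory)          # pass 2: same weights, what are they?
        return self.box_head(loc_q).sigmoid(), self.cls_head(cls_q)

q, mem = torch.randn(2, 100, 256), torch.randn(2, 1024, 256)
boxes, logits = CascadeSharedDecoderSketch()(q, mem)
print(boxes.shape, logits.shape)                     # (2, 100, 4) (2, 100, 81)
```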

Fewer is More: Efficient Object Detection in Large Aerial Images

Dec 26, 2022
Xingxing Xie, Gong Cheng, Qingyang Li, Shicheng Miao, Ke Li, Junwei Han

Current mainstream object detection methods for large aerial images usually divide the image into patches and then exhaustively detect objects of interest on every patch, regardless of whether it contains any objects. This paradigm, although effective, is inefficient because the detectors must go through all patches, severely limiting inference speed. This paper presents an Objectness Activation Network (OAN) that helps detectors focus on fewer patches while achieving more efficient inference and more accurate results, offering a simple and effective solution to object detection in large images. In brief, OAN is a lightweight fully convolutional network that judges whether each patch contains objects; it can be easily integrated into many object detectors and jointly trained with them end-to-end. We extensively evaluate OAN with five advanced detectors. Using OAN, all five detectors achieve more than a 30.0% speed-up on three large-scale aerial image datasets while consistently improving accuracy. On extremely large Gaofen-2 images (29200$\times$27620 pixels), OAN improves detection speed by 70.5%. Moreover, we extend OAN to driving-scene object detection and 4K video object detection, boosting detection speed by 112.1% and 75.0%, respectively, without sacrificing accuracy. Code is available at https://github.com/Ranchosky/OAN.
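
The objectness-gating idea, i.e. scoring each patch with a cheap network and sending only promising patches to the full detector, can be sketched as below; the architecture, patch size, and threshold are illustrative assumptions rather than the OAN described in the paper.

```python
import torch
import torch.nn as nn

class ObjectnessGate(nn.Module):
    """A small fully convolutional scorer that decides, per patch of a large
    image, whether the patch is worth passing to the heavy detector."""
    def __init__(self, patch=1024):
        super().__init__()
        self.patch = patch
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=4, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=4, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    @torch.no_grad()
    def select_patches(self, image, thresh=0.5):
        """image: (3, H, W) with H and W multiples of the patch size."""
        _, H, W = image.shape
        keep = []
        for y in range(0, H, self.patch):
            for x in range(0, W, self.patch):
                crop = image[:, y:y + self.patch, x:x + self.patch].unsqueeze(0)
                if torch.sigmoid(self.net(crop)).item() > thresh:
                    keep.append((y, x))              # only these patches go to the detector
        return keep

img = torch.rand(3, 4096, 4096)
print(len(ObjectnessGate().select_patches(img)), "of 16 patches kept")
```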