Neehar Peri

An Empirical Analysis of Range for 3D Object Detection

Aug 08, 2023
Neehar Peri, Mengtian Li, Benjamin Wilson, Yu-Xiong Wang, James Hays, Deva Ramanan


LiDAR-based 3D detection plays a vital role in autonomous navigation. Surprisingly, although autonomous vehicles (AVs) must detect both near-field objects (for collision avoidance) and far-field objects (for longer-term planning), contemporary benchmarks focus only on near-field 3D detection. In this paper, we present an empirical analysis of far-field 3D detection using the long-range detection dataset Argoverse 2.0 to better understand the problem, and share the following insight: near-field LiDAR measurements are dense and optimally encoded by small voxels, while far-field measurements are sparse and better encoded with large voxels. We exploit this observation to build a collection of range experts tuned for near-vs-far-field detection, and propose simple techniques to efficiently ensemble models for long-range detection, improving efficiency by 33% and boosting accuracy by 3.2% CDS.
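
The core recipe lends itself to a short illustration. Below is a minimal sketch, using hypothetical inputs and helper names rather than the paper's released code, of how two range experts might be combined: a fine-voxel expert covers near-field detections, a coarse-voxel expert covers far-field detections, and each expert's outputs are kept only inside its assigned range band.

    import numpy as np

    def range_ensemble(detections_near, detections_far, cutoff=50.0):
        """Combine a near-field and a far-field range expert into one detector.

        detections_near / detections_far: arrays of shape (N, D) whose first
        two columns are the box center (x, y) in the ego frame; the near expert
        is trained with small voxels, the far expert with large voxels.
        cutoff: range (in meters) separating near-field from far-field.
        """
        def keep_in_band(dets, lo, hi):
            dist = np.linalg.norm(dets[:, :2], axis=1)  # bird's-eye-view range
            return dets[(dist >= lo) & (dist < hi)]

        near = keep_in_band(detections_near, 0.0, cutoff)   # dense region, small voxels
        far = keep_in_band(detections_far, cutoff, np.inf)  # sparse region, large voxels
        return np.concatenate([near, far], axis=0)

The paper additionally studies how to share computation between experts for efficiency; this sketch only shows the range-based gating that motivates the ensemble.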

* Accepted to ICCV 2023 Workshop - Robustness and Reliability of Autonomous Vehicles in the Open-World 

ZeroFlow: Fast Zero Label Scene Flow via Distillation

May 23, 2023
Kyle Vedder, Neehar Peri, Nathaniel Chodosh, Ishan Khatri, Eric Eaton, Dinesh Jayaraman, Yang Liu, Deva Ramanan, James Hays


Scene flow estimation is the task of describing the 3D motion field between temporally successive point clouds. State-of-the-art methods use strong priors and test-time optimization techniques, but require on the order of tens of seconds for large-scale point clouds, making them unusable as computer vision primitives for real-time applications such as open-world object detection. Feed-forward methods are considerably faster, running on the order of tens to hundreds of milliseconds for large-scale point clouds, but require expensive human supervision. To address both limitations, we propose Scene Flow via Distillation, a simple distillation framework that uses a label-free optimization method to produce pseudo-labels to supervise a feed-forward model. Our instantiation of this framework, ZeroFlow, produces scene flow estimates in real time on large-scale point clouds at quality competitive with state-of-the-art methods while using zero human labels. Notably, at test time, ZeroFlow is over 1000$\times$ faster than label-free state-of-the-art optimization-based methods on large-scale point clouds and over 1000$\times$ cheaper to train on unlabeled data compared to the cost of human annotation of that data. To facilitate research reuse, we release our code, trained model weights, and high-quality pseudo-labels for the Argoverse 2 and Waymo Open datasets.
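
The distillation recipe above can be summarized in a few lines. The sketch below uses placeholder names (run_offline_optimizer for the slow, label-free scene flow method and student for the feed-forward network); it is an assumption-laden outline of the idea, not the released ZeroFlow code.

    import torch

    def distill_scene_flow(unlabeled_pairs, run_offline_optimizer, student,
                           epochs=10, lr=1e-4):
        """Train a fast feed-forward student on pseudo-labels produced by a
        slow, label-free test-time optimization method."""
        # Offline pass: generate pseudo-labels once (expensive, but needs no humans).
        pseudo_labeled = [(p0, p1, run_offline_optimizer(p0, p1))
                          for p0, p1 in unlabeled_pairs]

        optimizer = torch.optim.Adam(student.parameters(), lr=lr)
        for _ in range(epochs):
            for p0, p1, flow_label in pseudo_labeled:
                pred_flow = student(p0, p1)  # fast forward pass
                loss = torch.nn.functional.l1_loss(pred_flow, flow_label)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return student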

* 9 pages, 4 pages of Supplemental 

ReBound: An Open-Source 3D Bounding Box Annotation Tool for Active Learning

Mar 11, 2023
Wesley Chen, Andrew Edgley, Raunak Hota, Joshua Liu, Ezra Schwartz, Aminah Yizar, Neehar Peri, James Purtilo


In recent years, supervised learning has become the dominant paradigm for training deep-learning-based methods for 3D object detection. Lately, the academic community has studied 3D object detection in the context of autonomous vehicles (AVs) using publicly available datasets such as nuScenes and Argoverse 2.0. However, these datasets may have incomplete annotations, often only labeling a small subset of objects in a scene. Although commercial services exist for 3D bounding box annotation, these are often prohibitively expensive. To address these limitations, we propose ReBound, an open-source 3D visualization and dataset re-annotation tool that works across different datasets. In this paper, we detail the design of our tool and present survey results that highlight the usability of our software. Further, we show that ReBound is effective for exploratory data analysis and can facilitate active learning. Our code and documentation are available at https://github.com/ajedgley/ReBound

* Accepted to CHI 2023 Workshop - Intervening, Teaming, Delegating: Creating Engaging Automation Experiences (AutomationXP) 

A Brief Survey on Person Recognition at a Distance

Dec 17, 2022
Chrisopher B. Nalty, Neehar Peri, Joshua Gleason, Carlos D. Castillo, Shuowen Hu, Thirimachos Bourlai, Rama Chellappa


Person recognition at a distance entails recognizing the identity of an individual appearing in images or videos collected by long-range imaging systems such as drones or surveillance cameras. Despite recent advances in deep convolutional neural networks (DCNNs), this remains challenging. Images or videos collected by long-range cameras often suffer from atmospheric turbulence, blur, low resolution, unconstrained poses, and poor illumination. In this paper, we provide a brief survey of recent advances in person recognition at a distance. In particular, we review recent work in multi-spectral face verification, person re-identification, and gait-based analysis techniques. Furthermore, we discuss the merits and drawbacks of existing approaches and identify important, yet underexplored, challenges for deploying remote person recognition systems in the wild.

* This work has been accepted to the IEEE Asilomar Conference on Signals, Systems, and Computers (ACSSC) 2022 

Towards Long-Tailed 3D Detection

Nov 16, 2022
Neehar Peri, Achal Dave, Deva Ramanan, Shu Kong


Contemporary autonomous vehicle (AV) benchmarks have advanced techniques for training 3D detectors, particularly on large-scale LiDAR data. Surprisingly, although semantic class labels naturally follow a long-tailed distribution, contemporary benchmarks focus on only a few common classes (e.g., pedestrian and car) and neglect many rare classes in-the-tail (e.g., debris and stroller). However, AVs must still detect rare classes to ensure safe operation. Moreover, semantic classes are often organized within a hierarchy, e.g., tail classes such as child and construction-worker are arguably subclasses of pedestrian. However, such hierarchical relationships are often ignored, which may lead to misleading estimates of performance and missed opportunities for algorithmic innovation. We address these challenges by formally studying the problem of Long-Tailed 3D Detection (LT3D), which evaluates on all classes, including those in-the-tail. We evaluate and innovate upon popular 3D detection codebases, such as CenterPoint and PointPillars, adapting them for LT3D. We develop hierarchical losses that promote feature sharing across common-vs-rare classes, as well as improved detection metrics that award partial credit to "reasonable" mistakes respecting the hierarchy (e.g., mistaking a child for an adult). Finally, we point out that fine-grained tail class accuracy is particularly improved via multimodal fusion of RGB images with LiDAR; simply put, small fine-grained classes are challenging to identify from sparse (LiDAR) geometry alone, suggesting that multimodal cues are crucial to long-tailed 3D detection. Our modifications improve accuracy by 5% AP on average for all classes, and dramatically improve AP for rare classes (e.g., stroller AP improves from 3.6 to 31.6)!
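
The hierarchy-aware evaluation can be illustrated with a toy example. The class groupings and credit values below are illustrative assumptions, not the official LT3D metric implementation:

    # Toy hierarchy: fine-grained classes map to a coarser superclass.
    HIERARCHY = {
        "adult": "pedestrian",
        "child": "pedestrian",
        "construction-worker": "pedestrian",
        "car": "vehicle",
        "stroller": "wheeled-object",
    }

    def match_credit(pred_class, gt_class, full=1.0, partial=0.5):
        """Full credit for the exact class, partial credit for a 'reasonable'
        mistake within the same superclass (e.g., child vs. adult), else zero."""
        if pred_class == gt_class:
            return full
        same_parent = (HIERARCHY.get(pred_class) is not None and
                       HIERARCHY.get(pred_class) == HIERARCHY.get(gt_class))
        return partial if same_parent else 0.0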

* This work has been accepted to the Conference on Robot Learning (CoRL) 2022 

Forecasting from LiDAR via Future Object Detection

Mar 31, 2022
Neehar Peri, Jonathon Luiten, Mengtian Li, Aljoša Ošep, Laura Leal-Taixé, Deva Ramanan


Object detection and forecasting are fundamental components of embodied perception. These two problems, however, are largely studied in isolation by the community. In this paper, we propose an end-to-end approach for detection and motion forecasting based on raw sensor measurements as opposed to ground truth tracks. Instead of predicting the current frame locations and forecasting forward in time, we directly predict future object locations and backcast to determine where each trajectory began. Our approach not only improves overall accuracy compared to other modular or end-to-end baselines, but also prompts us to rethink the role of explicit tracking for embodied perception. Additionally, by linking future and current locations in a many-to-one manner, our approach is able to reason about multiple futures, a capability that was previously considered difficult for end-to-end approaches. We conduct extensive experiments on the popular nuScenes dataset and demonstrate the empirical effectiveness of our approach. In addition, we investigate the appropriateness of reusing standard forecasting metrics for an end-to-end setup, and find a number of limitations which allow us to build simple baselines to game these metrics. We address this issue with a novel set of joint forecasting and detection metrics that extend the commonly used AP metrics from the detection community to measure forecasting accuracy. Our code is available at https://github.com/neeharperi/FutureDet
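
A rough sketch of the backcasting idea follows; the array layout and grouping rule are assumptions for illustration rather than the FutureDet implementation.

    import numpy as np

    def backcast_to_current(future_centers, backcast_offsets):
        """Recover current-frame object locations from detections predicted
        directly in a future frame.

        future_centers: (N, 3) predicted future box centers.
        backcast_offsets: (N, 3) regressed displacement from each future center
            back to where its trajectory began in the current frame.
        """
        current_centers = future_centers + backcast_offsets
        # Futures whose backcast centers (nearly) coincide are linked to the same
        # current object, giving the many-to-one reasoning about multiple futures.
        groups = {}
        for i, center in enumerate(np.round(current_centers, 1)):
            groups.setdefault(tuple(center), []).append(i)
        return current_centers, groups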

* This work has been accepted to Computer Vision and Pattern Recognition (CVPR) 2022 

A Synthesis-Based Approach for Thermal-to-Visible Face Verification

Aug 21, 2021
Neehar Peri, Joshua Gleason, Carlos D. Castillo, Thirimachos Bourlai, Vishal M. Patel, Rama Chellappa


In recent years, visible-spectrum face verification systems have been shown to match expert forensic examiner recognition performance. However, such systems are ineffective in low-light and nighttime conditions. Thermal face imagery, which captures body heat emissions, effectively augments the visible spectrum, capturing discriminative facial features in scenes with limited illumination. Due to the increased cost and difficulty of obtaining diverse, paired thermal and visible spectrum datasets, algorithms and large-scale benchmarks for low-light recognition are limited. This paper presents an algorithm that achieves state-of-the-art performance on both the ARL-VTF and TUFTS multi-spectral face datasets. Importantly, we study the impact of face alignment, pixel-level correspondence, and identity classification with label smoothing for multi-spectral face synthesis and verification. We show that our proposed method is widely applicable, robust, and highly effective. In addition, we show that the proposed method significantly outperforms face frontalization methods on profile-to-frontal verification. Finally, we present MILAB-VTF(B), a challenging multi-spectral face dataset that is composed of paired thermal and visible videos. To the best of our knowledge, with face data from 400 subjects, this dataset represents the most extensive collection of publicly available indoor and long-range outdoor thermal-visible face imagery. Lastly, we show that our end-to-end thermal-to-visible face verification system provides strong performance on the MILAB-VTF(B) dataset.
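
At a high level, the synthesis-based pipeline translates the thermal probe into the visible domain and then runs standard visible-spectrum verification. The sketch below uses placeholder modules (synthesizer, face_embedder) and an illustrative threshold; it is an outline of the general recipe under those assumptions, not the paper's system.

    import torch
    import torch.nn.functional as F

    def verify_thermal_vs_visible(thermal_probe, visible_gallery,
                                  synthesizer, face_embedder, threshold=0.35):
        """Synthesize a visible face from the thermal probe, embed both images
        with a visible-spectrum face network, and compare by cosine similarity."""
        with torch.no_grad():
            synthesized = synthesizer(thermal_probe)  # thermal -> visible
            emb_probe = F.normalize(face_embedder(synthesized), dim=-1)
            emb_gallery = F.normalize(face_embedder(visible_gallery), dim=-1)
            similarity = (emb_probe * emb_gallery).sum(dim=-1)
        return similarity > threshold, similarity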


PreferenceNet: Encoding Human Preferences in Auction Design with Deep Learning

Jun 06, 2021
Neehar Peri, Michael J. Curry, Samuel Dooley, John P. Dickerson


The design of optimal auctions is a problem of interest in economics, game theory and computer science. Despite decades of effort, strategyproof, revenue-maximizing auction designs are still not known outside of restricted settings. However, recent methods using deep learning have shown some success in approximating optimal auctions, recovering several known solutions and outperforming strong baselines when optimal auctions are not known. In addition to maximizing revenue, auction mechanisms may also seek to encourage socially desirable constraints such as allocation fairness or diversity. However, these notions have neither standardized nor widely accepted formal definitions. In this paper, we propose PreferenceNet, an extension of existing neural-network-based auction mechanisms to encode constraints using (potentially human-provided) exemplars of desirable allocations. In addition, we introduce a new metric to evaluate an auction allocation's adherence to such socially desirable constraints and demonstrate that our proposed method is competitive with current state-of-the-art neural-network-based auction designs. We validate our approach through human subject research and show that we are able to effectively capture real human preferences. Our code is available at https://github.com/neeharperi/PreferenceNet
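
The exemplar-based constraint encoding can be sketched as a small auxiliary scorer whose output penalizes undesirable allocations during training. The network sizes, loss mixing weight, and names below are illustrative assumptions, not the released PreferenceNet code.

    import torch
    import torch.nn as nn

    class PreferenceScorer(nn.Module):
        """Learns, from labeled exemplar allocations, how well an allocation
        matches a desired property such as fairness or diversity."""
        def __init__(self, n_agents, n_items):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_agents * n_items, 128), nn.ReLU(),
                nn.Linear(128, 1))

        def forward(self, allocation):  # allocation: (B, n_agents, n_items)
            return torch.sigmoid(self.net(allocation.flatten(1)))

    def auction_loss(revenue, regret, allocation, scorer, lam=1.0):
        # Usual learned-auction objective (maximize revenue, penalize regret),
        # plus a penalty when the scorer deems the allocation undesirable.
        preference_penalty = (1.0 - scorer(allocation)).mean()
        return -revenue + regret + lam * preference_penalty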

* First two authors contributed equally 

The Devil is in the Details: Self-Supervised Attention for Vehicle Re-Identification

Apr 15, 2020
Pirazh Khorramshahi, Neehar Peri, Jun-cheng Chen, Rama Chellappa


In recent years, the research community has approached the problem of vehicle re-identification (re-id) with attention-based models, specifically focusing on regions of a vehicle containing discriminative information. These re-id methods rely on expensive key-point labels, part annotations, and additional attributes including vehicle make, model, and color. Given the large number of vehicle re-id datasets with various levels of annotations, strongly-supervised methods are unable to scale across different domains. In this paper, we present Self-supervised Attention for Vehicle Re-identification (SAVER), a novel approach to effectively learn vehicle-specific discriminative features. Through extensive experimentation, we show that SAVER improves upon the state-of-the-art on challenging vehicle re-id benchmarks including VeRi-776, VehicleID, Vehicle-1M and VERI-Wild. SAVER demonstrates how proper regularization techniques significantly constrain the vehicle re-id task and help generate robust deep features.


A Dual Path Model With Adaptive Attention For Vehicle Re-Identification

May 09, 2019
Pirazh Khorramshahi, Amit Kumar, Neehar Peri, Sai Saketh Rambhatla, Jun-Cheng Chen, Rama Chellappa


In recent years, attention models have been extensively used for person and vehicle re-identification. Most re-identification methods are designed to focus attention at key-point locations. However, depending on the orientation, the contribution of each key-point varies. In this paper, we present a novel dual path adaptive attention model for vehicle re-identification (AAVER). The global appearance path captures macroscopic vehicle features, while the orientation-conditioned part appearance path learns to capture localized discriminative features by focusing attention on the most informative key-points. Through extensive experimentation, we show that the proposed AAVER method is able to accurately re-identify vehicles in unconstrained scenarios, yielding state-of-the-art results on the challenging VeRi-776 dataset. As a byproduct, the proposed system is also able to accurately predict vehicle key-points, showing an improvement of more than 7% over the state of the art.
