
Manash Pratim Das

Online Photometric Calibration of Automatic Gain Thermal Infrared Cameras

Jan 11, 2021
Manash Pratim Das, Larry Matthies, Shreyansh Daftry

Thermal infrared cameras are increasingly used in applications such as robot vision, industrial inspection, and medical imaging, thanks to their improved resolution and portability. However, the performance of traditional computer vision techniques developed for electro-optical imagery does not directly translate to the thermal domain, for two major reasons: these algorithms require photometric assumptions to hold, and methods for photometric calibration of RGB cameras cannot be applied to thermal-infrared cameras due to differences in data acquisition and sensor phenomenology. In this paper, we take a step in this direction and introduce a novel algorithm for online photometric calibration of thermal-infrared cameras. Our proposed method does not require any specific driver or hardware support and hence can be applied to any commercial off-the-shelf thermal IR camera. We present this in the context of visual odometry and SLAM algorithms, and demonstrate the efficacy of our proposed system through extensive experiments on both standard benchmark datasets and real-world field tests with a thermal-infrared camera in natural outdoor environments.
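To give a sense of the problem the abstract describes, the sketch below estimates a per-frame affine gain change between two thermal frames from tracked point intensities. This is a minimal illustration only, not the paper's algorithm: it assumes an affine gain/offset model and known point correspondences, whereas the paper's online method handles the full automatic-gain behavior of the camera.

```python
import numpy as np

def estimate_affine_gain(prev_vals, curr_vals):
    """Least-squares fit of curr ~= g * prev + b over tracked points.

    Illustrative assumption: the camera's automatic gain between two
    consecutive frames is modeled as a single affine map applied to
    the raw intensities of corresponding scene points.
    """
    prev = np.asarray(prev_vals, dtype=float)
    curr = np.asarray(curr_vals, dtype=float)
    # Solve [prev, 1] @ [g, b]^T = curr in the least-squares sense.
    A = np.stack([prev, np.ones_like(prev)], axis=1)
    (g, b), *_ = np.linalg.lstsq(A, curr, rcond=None)
    return g, b

# Synthetic check: intensities scaled by 1.5 with offset 10 are recovered.
prev = np.array([50.0, 80.0, 120.0, 200.0])
g, b = estimate_affine_gain(prev, 1.5 * prev + 10.0)
```

Once `g` and `b` are known, dividing out the gain makes brightness-constancy assumptions in downstream odometry hold approximately again.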

* 8 pages, 6 figures, Pre-Print. This work has been submitted to the IEEE for possible publication 

Joint Point Cloud and Image Based Localization For Efficient Inspection in Mixed Reality

Nov 05, 2018
Manash Pratim Das, Zhen Dong, Sebastian Scherer

This paper introduces a method of structure inspection using mixed-reality headsets to reduce the human effort in reporting accurate inspection information, such as fault locations in 3D coordinates. Prior to every inspection, the headset needs to be localized. While external pose estimation and fiducial-marker-based localization would require setup, maintenance, and manual calibration, marker-free self-localization can be achieved using the onboard depth sensor and camera. However, due to the limited depth-sensor range of portable mixed-reality headsets like the Microsoft HoloLens, localization based on simple point cloud registration (sPCR) would require extensive mapping of the environment. Localization based on camera images alone suffers from stereo ambiguities and is therefore viewpoint-dependent. We thus introduce a novel approach to Joint Point Cloud and Image-based Localization (JPIL) for mixed-reality headsets that uses visual cues and headset orientation to register small, partially overlapping point clouds, saving significant manual labor and time in environment mapping. Compared to sPCR, our empirical results show an average 10-fold reduction in the required overlap surface area, which could save on average 20 minutes per inspection. JPIL is not restricted to inspection tasks; it can also enable intuitive human-robot interaction for spatial mapping and scene understanding in conjunction with other agents, such as the autonomous robotic systems increasingly deployed in outdoor environments for applications like structural inspection.
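The core geometric step in any point cloud registration pipeline is computing a rigid transform between two point sets. As a generic building block (not JPIL itself, which registers partially overlapped clouds via visual cues and headset orientation rather than known point-to-point correspondences), here is the standard Kabsch/Procrustes solution given correspondences:

```python
import numpy as np

def rigid_align(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q,
    assuming row i of P corresponds to row i of Q (Kabsch algorithm).

    Generic textbook step only; the paper's JPIL method addresses the
    harder problem of finding the alignment without correspondences.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

A coarse orientation prior, such as the headset attitude the abstract mentions, shrinks the search space for such alignments dramatically when correspondences are unknown.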

* IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, 2018  

5-DoF Monocular Visual Localization Over Grid Based Floor

Sep 14, 2017
Manash Pratim Das, Gaurav Gardi, Jayanta Mukhopadhyay

Reliable localization is one of the most important components of an MAV system, and localization in an indoor, GPS-denied environment is a relatively difficult problem. Current vision-based algorithms track optical features to compute odometry. We present a novel localization method for environments in which orthogonal sets of equally spaced lines form a grid. Using a monocular camera and the properties of the grid lines below, the MAV is localized within each sub-cell of the grid and, consequently, relative to the grid as a whole. We demonstrate the effectiveness of our system onboard a customized MAV platform. The experimental results show that our method provides accurate 5-DoF localization over grid lines and runs in real time.

* Accepted to International Conference on Indoor Positioning and Indoor Navigation 2017 