
Marcelo Becker

Automatic Routing System for Intelligent Warehouses

Jul 13, 2023
Kelen C. T. Vivaldini, Jorge P. M. Galdames, Thales B. Pasqual, Rafael M. Sobral, Roberto C. Araújo, Marcelo Becker, Glauco A. P. Caurin

Automation of logistic processes is essential to improve productivity and reduce costs. In this context, intelligent warehouses are becoming key to logistic systems thanks to their ability to optimize transportation tasks and, consequently, reduce costs. This paper first briefly reviews routing systems applied to intelligent warehouses and then presents the approach used to develop our routing system. The routing system resolves traffic jams and collisions, generating conflict-free, optimized paths before sending the final paths to the robotic forklifts, and it also monitors the progress of all tasks. When a problem occurs, the routing system can change task priorities, routes, and other parameters in order to avoid new conflicts. In the routing simulations, each vehicle executes its tasks starting from a predefined initial pose and moving to the desired position. Our algorithm combines Dijkstra's shortest-path algorithm with the time-window approach and was implemented in the C language. Computer simulation tests were used to validate the algorithm's efficiency under different working conditions. Several simulations were carried out using the Player/Stage simulator to test the algorithms; thanks to these simulations, we could fix many faults and refine the algorithms before embedding them in real robots.
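
The abstract names the core combination: Dijkstra's shortest-path search extended with time windows, so two forklifts never claim the same node at the same time. As a rough illustration only, here is a minimal Python sketch of an earliest-arrival Dijkstra that delays a robot at nodes reserved by other vehicles; the `graph` and `reservations` layouts and all names are assumptions for the example, not the paper's C implementation.

```python
import heapq

def dijkstra_time_windows(graph, reservations, start, goal, t0=0):
    """Earliest-arrival Dijkstra on a warehouse graph.

    graph:        {node: [(neighbor, travel_time), ...]}
    reservations: {node: [(t_from, t_to), ...]} -- time windows already
                  claimed by other vehicles (assumed, non-overlapping).
    Returns (arrival_time, path) or None if the goal is unreachable.
    """
    def earliest_free(node, t):
        # Push the arrival time past every reserved window it falls into.
        for lo, hi in sorted(reservations.get(node, [])):
            if lo <= t < hi:
                t = hi
        return t

    best = {start: t0}
    pq = [(t0, start, [start])]
    while pq:
        t, node, path = heapq.heappop(pq)
        if node == goal:
            return t, path
        if t > best.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, dt in graph.get(node, []):
            arr = earliest_free(nxt, t + dt)
            if arr < best.get(nxt, float("inf")):
                best[nxt] = arr
                heapq.heappush(pq, (arr, nxt, path + [nxt]))
    return None

# Tiny demo: node B is reserved during [4, 9), so the robot is delayed
# and reaches C at t = 14 instead of t = 10.
g = {"A": [("B", 5)], "B": [("C", 5)], "C": []}
print(dijkstra_time_windows(g, {"B": [(4, 9)]}, "A", "C"))
```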

* 2010 IEEE International Conference on Robotics and Automation, International Workshop on Robotics and Intelligent Transportation System, full-day workshop, May 7th, 2010, Anchorage, Alaska. Organizers: Christian Laugier (INRIA, France), Ming Lin (University of North Carolina, USA), Philippe Martinet (IFMA and LASMEA, France), Urbano Nunes (ISR, Portugal)

Visual Localization and Mapping in Dynamic and Changing Environments

Sep 21, 2022
João Carlos Virgolino Soares, Vivian Suzano Medeiros, Gabriel Fischer Abati, Marcelo Becker, Glauco Caurin, Marcelo Gattass, Marco Antonio Meggiolaro

The real-world deployment of fully autonomous mobile robots depends on a robust SLAM (Simultaneous Localization and Mapping) system, capable of handling dynamic environments, where objects move in front of the robot, and changing environments, where objects are moved or replaced after the robot has already mapped the scene. This paper presents Changing-SLAM, a method for robust Visual SLAM in both dynamic and changing environments. This is achieved by using a Bayesian filter combined with a long-term data association algorithm. It also employs an efficient object-detection-based algorithm for dynamic keypoint filtering that correctly identifies features inside the bounding box that are not dynamic, preventing a depletion of features that could cause tracking loss. Furthermore, a new dataset with RGB-D data, called the PUC-USP dataset, was developed specifically for the evaluation of changing environments at the object level. Six sequences were created using a mobile robot, an RGB-D camera, and a motion capture system. The sequences were designed to capture different scenarios that could lead to a tracking failure or map corruption. To the best of our knowledge, Changing-SLAM is the first Visual SLAM system that is robust to both dynamic and changing environments without assuming a given camera pose or a known map, while also operating in real time. The proposed method was evaluated using benchmark datasets and compared with other state-of-the-art methods, proving to be highly accurate.
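
The Bayesian-filter idea behind the long-term data association can be illustrated with a single scalar belief per map point: each frame, the probability that the point is dynamic is updated from whether it projected inside a detected object's bounding box. The sketch below is a generic Bayes update with made-up sensor-model probabilities, not Changing-SLAM's actual filter.

```python
def update_dynamic_belief(p_dyn, in_box, p_hit=0.8, p_false=0.2):
    """One Bayes update of the belief that a map point is dynamic.

    p_dyn:   prior probability that the point lies on a dynamic object
    in_box:  whether the point projected inside a detected bounding box
    p_hit:   assumed P(in_box | dynamic); p_false: assumed P(in_box | static)
    """
    l_dyn = p_hit if in_box else 1.0 - p_hit
    l_sta = p_false if in_box else 1.0 - p_false
    return l_dyn * p_dyn / (l_dyn * p_dyn + l_sta * (1.0 - p_dyn))

# A point repeatedly seen inside detection boxes becomes confidently
# dynamic, so its features can be excluded from tracking and mapping.
p = 0.5
for _ in range(5):
    p = update_dynamic_belief(p, in_box=True)
print(round(p, 3))  # approaches 1 (about 0.999 here)
```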

* 14 pages, 13 figures 

EEG-Based Epileptic Seizure Prediction Using Temporal Multi-Channel Transformers

Sep 18, 2022
Ricardo V. Godoy, Tharik J. S. Reis, Paulo H. Polegato, Gustavo J. G. Lahr, Ricardo L. Saute, Frederico N. Nakano, Helio R. Machado, Americo C. Sakamoto, Marcelo Becker, Glauco A. P. Caurin

Epilepsy is one of the most common neurological diseases, characterized by transient and unprovoked events called epileptic seizures. Electroencephalography (EEG) is an auxiliary method used for both the diagnosis and the monitoring of epilepsy. Given the unexpected nature of an epileptic seizure, its prediction would improve patient care, optimizing quality of life and the treatment of epilepsy. Predicting an epileptic seizure implies identifying two distinct EEG states in a patient with epilepsy: the preictal and the interictal. In this paper, we developed two deep learning models, the Temporal Multi-Channel Transformer (TMC-T) and the Temporal Multi-Channel Vision Transformer (TMC-ViT), adaptations of Transformer-based architectures for multi-channel temporal signals. Moreover, we assessed the impact of choosing different preictal durations, since this length is not a consensus among experts, and also evaluated how the sample size benefits each model. Our models are compared with fully connected, convolutional, and recurrent networks. The algorithms were trained and evaluated in a patient-specific manner on raw EEG signals from the CHB-MIT database. Experimental results and statistical validation demonstrated that our TMC-ViT model surpassed the CNN architecture that is the state of the art in seizure prediction.
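
To make the model family concrete, here is a minimal PyTorch sketch of a Transformer encoder over a raw multi-channel EEG window, split into temporal patches the way a ViT splits an image. Every size here (channel count, window length, patch size, depth) is an assumption for illustration, not the paper's TMC-T/TMC-ViT configuration.

```python
import torch
import torch.nn as nn

class EEGTransformer(nn.Module):
    """Illustrative Transformer encoder for raw multi-channel EEG windows.

    Input (batch, channels, samples); output 2 logits (interictal vs.
    preictal). All hyperparameters are assumptions, not the paper's.
    """

    def __init__(self, n_channels=23, n_samples=1280, patch=128, d_model=64):
        super().__init__()
        assert n_samples % patch == 0
        self.n_tokens = n_samples // patch
        # One token per temporal patch, embedding all channels jointly.
        self.embed = nn.Linear(n_channels * patch, d_model)
        self.pos = nn.Parameter(torch.zeros(1, self.n_tokens, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, nhead=4, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)

    def forward(self, x):                          # x: (B, C, T)
        b, c, _ = x.shape
        x = x.view(b, c, self.n_tokens, -1)        # split time into patches
        x = x.permute(0, 2, 1, 3).reshape(b, self.n_tokens, -1)
        x = self.embed(x) + self.pos               # add learned positions
        x = self.encoder(x)
        return self.head(x.mean(dim=1))            # average-pool tokens

# Four 5-second windows of 23-channel EEG at 256 Hz (assumed shapes).
model = EEGTransformer()
print(model(torch.randn(4, 23, 1280)).shape)       # torch.Size([4, 2])
```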

* 15 pages, 10 figures 

Multi-Sensor Fusion based Robust Row Following for Compact Agricultural Robots

Jun 28, 2021
Andres Eduardo Baquero Velasquez, Vitor Akihiro Hisano Higuti, Mateus Valverde Gasparino, Arun Narenthiran Sivakumar, Marcelo Becker, Girish Chowdhary

This paper presents a state-of-the-art LiDAR-based autonomous navigation system for under-canopy agricultural robots. Under-canopy agricultural navigation has been a challenging problem because GNSS and other positioning sensors are prone to significant errors due to attenuation and multi-path effects caused by crop leaves and stems. Reactive navigation that detects crop rows from LiDAR measurements is a better alternative to GPS but suffers from occlusion by leaves under the canopy. Our system addresses this challenge by fusing IMU and LiDAR measurements in an Extended Kalman Filter framework on low-cost hardware. In addition, a local goal generator is introduced to provide locally optimal reference trajectories to the onboard controller. Our system was validated extensively in real-world field environments over a distance of 50.88 km on multiple robots in different field conditions across different locations. We report state-of-the-art distance-between-interventions results, showing that our system is able to navigate safely without interventions for 386.9 m on average in fields without significant gaps in the crop rows, 56.1 m in production fields, and 47.5 m in fields with gaps (spaces of 1 m without plants on both sides of the row).
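
A minimal sketch of the sensor-fusion step, assuming a two-state model (lateral offset and heading error relative to the row centerline): wheel speed and IMU yaw rate drive the EKF prediction, and a LiDAR row fit provides the measurement. All noise values and the measurement model are illustrative assumptions, not the system's actual filter.

```python
import numpy as np

class RowEKF:
    """Toy EKF for in-row navigation: state x = [lateral offset d (m),
    heading error theta (rad)] relative to the crop-row centerline."""

    def __init__(self):
        self.x = np.zeros(2)
        self.P = np.eye(2) * 0.1
        self.Q = np.diag([1e-4, 1e-4])   # process noise (assumed)
        self.R = np.diag([2e-2, 1e-2])   # LiDAR row-fit noise (assumed)

    def predict(self, v, w, dt):
        d, th = self.x
        # Nonlinear motion model: offset grows with v * sin(theta),
        # heading integrates the IMU yaw rate w.
        self.x = np.array([d + v * np.sin(th) * dt, th + w * dt])
        F = np.array([[1.0, v * np.cos(th) * dt],
                      [0.0, 1.0]])        # Jacobian of the motion model
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):                  # z = [d_lidar, theta_lidar]
        H = np.eye(2)                     # the row fit observes x directly
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - H @ self.x)
        self.P = (np.eye(2) - K @ H) @ self.P

# One predict/update cycle with made-up readings.
ekf = RowEKF()
ekf.predict(v=0.5, w=0.02, dt=0.1)       # odometry + IMU step
ekf.update([0.05, -0.01])                # LiDAR row-fit measurement
print(ekf.x)
```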
