The singularities of a serial robotic manipulator are those configurations in which the robot loses the ability to move in at least one direction. Their identification is therefore fundamental to enhancing current control and motion planning strategies. While classical approaches entail computing the determinant of either a 6×n or n×n matrix for a serial robot with n degrees of freedom, this work addresses a novel singularity identification method based on modelling the twists defined by the joint axes of the robot as vectors of the six-dimensional and three-dimensional geometric algebras. In particular, the method identifies which configurations cause the exterior product of these twists to vanish. In addition, since rotors represent rotations in geometric algebra, once these singularities have been identified, a distance function is defined in the configuration space C such that its restriction to the set of singular configurations S allows us to compute the distance of any configuration to a given singularity. This distance function is used to improve how singularities are handled in three different scenarios, namely motion planning, motion control and bilateral teleoperation.
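The vanishing exterior product above has a simple linear-algebra counterpart: the wedge product of n twists in 6D vanishes exactly when the twists are linearly dependent, i.e. when the 6×n twist matrix loses rank. A minimal numerical sketch of this test (not the authors' geometric-algebra implementation; function and variable names are ours):

```python
import numpy as np

def twist_wedge_vanishes(twists, tol=1e-9):
    """Test whether the exterior (wedge) product of the joint twists vanishes.

    `twists` is a 6 x n matrix whose columns are the joint twists. The wedge
    product of n vectors vanishes exactly when they are linearly dependent,
    i.e. when the matrix has rank < n, so we check the smallest singular value.
    The returned margin also serves as a crude distance-to-singularity proxy.
    """
    sigma = np.linalg.svd(np.asarray(twists, dtype=float), compute_uv=False)
    return sigma[-1] < tol, sigma[-1]

# Toy example: two linearly dependent twists (a singular configuration)
t1 = np.array([1.0, 0, 0, 0, 0, 0])
t2 = 2.0 * t1
singular, margin = twist_wedge_vanishes(np.column_stack([t1, t2]))
```

The smallest singular value plays a role analogous to the norm of the wedge product: it shrinks continuously as the configuration approaches a singularity.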
This work addresses the inverse kinematics of serial robots using conformal geometric algebra. Classical approaches entail either the use of homogeneous matrices, which involves high computational cost and long execution times, or the development of particular geometric strategies that cannot be generalized to arbitrary serial robots. In this work, we present a compact, elegant and intuitive formulation of robot kinematics based on conformal geometric algebra that provides a suitable framework for the closed-form resolution of the inverse kinematic problem for manipulators with a spherical wrist. For serial robots of this kind, the inverse kinematics problem can be split into two subproblems: the position problem and the orientation problem. The latter is solved by appropriately splitting the rotor that defines the target orientation into three simpler rotors, while the former is solved by developing a geometric strategy for each combination of prismatic and revolute joints that forms the position part of the robot. Finally, the inverse kinematics of 7-DoF redundant manipulators with a spherical wrist is solved by extending the geometric solutions obtained in the non-redundant case.
Regularization in convolutional neural networks (CNNs) is usually addressed with dropout layers. However, dropout is sometimes detrimental in the convolutional part of a CNN, as it simply sets to zero a percentage of pixels in the feature maps, adding unrepresentative examples during training. Here, we propose a CNN layer that performs regularization by applying random rotations or reflections to a small percentage of feature maps after every convolutional layer. We show that this concept is beneficial for images with orientational symmetries, such as medical images, as it provides a certain degree of rotational invariance. We tested this method on two datasets: a patch-based set of histopathology images (PatchCamelyon), performing classification with a generic DenseNet, and a set of specular microscopy images of the corneal endothelium, performing segmentation with a tailored U-net, improving the performance in both cases.
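The core operation, applying a random 90-degree rotation or a reflection to a small fraction of the feature maps, can be sketched as follows. This is an illustrative NumPy version under our own assumptions (square feature maps in a `(C, H, W)` array; names are ours), not the paper's actual CNN layer:

```python
import numpy as np

def random_rotate_maps(feature_maps, fraction=0.1, rng=None):
    """Training-time regularization sketch: apply a random 90-degree rotation
    or an axis reflection to a small fraction of the feature maps.

    feature_maps: array of shape (C, H, W), assumed square (H == W) so that
    rotations preserve the shape. Returns a transformed copy.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = feature_maps.copy()
    n_maps = out.shape[0]
    # Pick which channels to perturb (at least one)
    chosen = rng.choice(n_maps, size=max(1, int(fraction * n_maps)), replace=False)
    for c in chosen:
        if rng.integers(0, 2) == 0:
            out[c] = np.rot90(out[c], k=rng.integers(1, 4))  # 90/180/270 deg
        else:
            out[c] = np.flip(out[c], axis=rng.integers(0, 2))  # h/v reflection
    return out
```

In a real network this would run only during training (like dropout), leaving inference untouched.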
The automated segmentation of cancer tissue in histopathology images can help clinicians detect, diagnose, and analyze the disease. Unlike the natural images used to benchmark most convolutional networks, histopathology images can be extremely large, and the cancerous patterns can extend beyond 1000 pixels. The well-known networks in the literature were therefore never conceived to handle these peculiarities. In this work, we propose a Fully Convolutional DenseUNet specifically designed for histopathology problems. We evaluated our network on two public pathology datasets published as challenges at MICCAI 2019: binary segmentation of colon cancer images (DigestPath2019) and multi-class segmentation of prostate cancer images (Gleason2019), achieving results comparable to and better than those of the challenge winners, respectively. Furthermore, we discuss good practices in the training setup that yield the best performance, as well as the main challenges posed by these histopathology datasets.
Efficient integration of solar energy into the electricity mix depends on a reliable anticipation of its intermittency. A promising approach to forecasting the temporal variability of solar irradiance resulting from cloud cover dynamics is based on the analysis of sequences of ground-taken sky images. Despite encouraging results, a recurrent limitation of current Deep Learning approaches lies in their ubiquitous tendency to react to past observations rather than actively anticipate future events. This leads to a systematic temporal lag and little ability to predict sudden events. To address this challenge, we introduce ECLIPSE, a spatio-temporal neural network architecture that models cloud motion from sky images to predict both future segmented images and corresponding irradiance levels. We show that ECLIPSE anticipates critical events and considerably reduces temporal delay while generating visually realistic futures.
A number of industrial applications, such as smart grids, power plant operation, hybrid system management or energy trading, could benefit from improved short-term solar forecasting, addressing the intermittent energy production from solar panels. However, current approaches to modelling the cloud cover dynamics from sky images still lack precision regarding the spatial configuration of clouds, their temporal dynamics and their physical interactions with solar radiation. Benefiting from a growing number of large datasets, data-driven methods are being developed to address these limitations with promising results. In this study, we compare four commonly used Deep Learning architectures trained to forecast solar irradiance from sequences of hemispherical sky images and exogenous variables. To assess the relative performance of each model, we used the Forecast Skill metric based on the smart persistence model, as well as ramp and time distortion metrics. The results show that encoding spatio-temporal aspects of the sequence of sky images greatly improved the predictions, with the 10 min ahead Forecast Skill reaching 20.4% on the test year. However, based on the experimental data, we conclude that, with a common setup, Deep Learning models tend to behave much like a 'very smart persistence model': temporally aligned with the persistence model while mitigating its most penalising errors. Thus, despite being captured by the sky cameras, models often miss fundamental events causing large irradiance changes, such as clouds obscuring the sun. We hope that our work will contribute to a shift of this approach to irradiance forecasting, from reactive to anticipatory.
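The Forecast Skill metric used above compares a model's error to that of the (smart) persistence baseline. A minimal sketch of the standard MSE-based definition (variable names are ours):

```python
import numpy as np

def forecast_skill(y_true, y_pred, y_persistence):
    """Forecast Skill relative to a persistence baseline:
    FS = 1 - MSE(model) / MSE(persistence).

    FS > 0 means the model beats persistence; FS = 1 is a perfect forecast;
    FS < 0 means the model does worse than the baseline.
    """
    y_true = np.asarray(y_true, dtype=float)
    mse_model = np.mean((y_true - np.asarray(y_pred, dtype=float)) ** 2)
    mse_pers = np.mean((y_true - np.asarray(y_persistence, dtype=float)) ** 2)
    return 1.0 - mse_model / mse_pers
```

The smart persistence baseline itself typically scales the last observed irradiance by the ratio of clear-sky irradiances between observation and target time, rather than repeating the raw value.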
Improving irradiance forecasting is critical to further increase the share of solar in the energy mix. On a short time scale, fish-eye cameras on the ground are used to capture the cloud displacements causing the local variability of the electricity production. As most of the solar radiation comes directly from the Sun, current forecasting approaches use its position in the image as a reference to interpret the cloud cover dynamics. However, existing Sun tracking methods rely on external data and a calibration of the camera, which requires access to the device. To address these limitations, this study introduces an image-based Sun tracking algorithm that localises the Sun in the image when it is visible and interpolates its daily trajectory from past observations. We validate the method on a set of sky images collected over a year at SIRTA's lab. Experimental results show that the proposed method provides robust, smooth Sun trajectories with a mean absolute error below 1% of the image size.
There has recently been a flurry of exciting advances in deep learning models on point clouds. However, these advances have been hampered by the difficulty of creating labelled point cloud datasets: sparse point clouds often have unclear label identities for certain points, while dense point clouds are time-consuming to annotate. Inspired by mask-based pre-training in the natural language processing community, we propose a novel pre-training mechanism for point clouds. It works by masking occluded points that result from observing the point cloud at different camera views. It then optimizes a completion model that learns how to reconstruct the occluded points, given the partial point cloud. In this way, our method learns a pre-trained representation that can identify the visual constraints inherently embedded in real-world point clouds. We call our method Occlusion Completion (OcCo). We demonstrate that OcCo learns representations that improve generalization on downstream tasks over prior pre-training methods, that transfer to different datasets, that reduce training time, and that improve labelled sample efficiency. Our code and dataset are available at https://github.com/hansen7/OcCo
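The occlusion-masking idea can be illustrated with a toy z-buffer: viewed from a single direction, only the nearest point per image cell is visible, and the rest become the "occluded" targets for the completion model. This is a deliberately simplified sketch under our own assumptions (one axis-aligned view, a coarse grid; the actual OcCo pipeline renders from many camera viewpoints):

```python
import numpy as np

def occlude_from_view(points, grid=32):
    """Split a point cloud into (visible, occluded) parts for a single view.

    Looks along the +z axis: points are binned into a grid over (x, y), and
    only the point with the smallest z (nearest to the camera) in each cell
    is kept as visible; all others in that cell are marked occluded.
    """
    pts = np.asarray(points, dtype=float)
    xy = pts[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    # Map (x, y) coordinates to integer grid-cell indices
    cells = ((xy - mins) / (maxs - mins + 1e-9) * (grid - 1)).astype(int)
    keys = cells[:, 0] * grid + cells[:, 1]
    visible = np.zeros(len(pts), dtype=bool)
    for key in np.unique(keys):
        idx = np.where(keys == key)[0]
        visible[idx[np.argmin(pts[idx, 2])]] = True  # nearest point wins
    return pts[visible], pts[~visible]  # (partial cloud, occluded targets)
```

During pre-training, the completion model receives the partial cloud and is optimized to reconstruct the occluded targets.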
Despite the advances in the field of solar energy, improvements in solar forecasting techniques, addressing the intermittent electricity production, remain essential for securing its future integration into a wider energy supply. A promising approach to anticipating irradiance changes consists of modeling the cloud cover dynamics from ground-taken or satellite images. This work presents preliminary results on the application of deep Convolutional Neural Networks to 2 to 20 min irradiance forecasting using hemispherical sky images and exogenous variables. We evaluate the models on a set of irradiance measurements and corresponding sky images collected in Palaiseau (France) over 8 months with a temporal resolution of 2 min. To outline the learning of neural networks in the context of short-term irradiance forecasting, we implemented visualisation techniques revealing the types of patterns recognised by trained algorithms in sky images. In addition, we show that training models with past samples of the same day improves their forecast skill, relative to the smart persistence model in terms of Mean Square Error, by around 10% on a 10 min ahead prediction. These results emphasise the benefit of integrating previous same-day data in short-term forecasting. This, in turn, can be achieved through model fine-tuning or by using recurrent units to facilitate the extraction of relevant temporal features from past data.