Peter Sadowski

Diffusion Models for High-Resolution Solar Forecasts

Feb 01, 2023
Yusuke Hatanaka, Yannik Glaser, Geoff Galgon, Giuseppe Torri, Peter Sadowski

Forecasting future weather and climate is inherently difficult. Machine learning offers new approaches to increase the accuracy and computational efficiency of forecasts, but current methods are unable to accurately model uncertainty in high-dimensional predictions. Score-based diffusion models offer a new approach to modeling probability distributions over many dependent variables, and in this work, we demonstrate how they provide probabilistic forecasts of weather and climate variables at unprecedented resolution, speed, and accuracy. We apply the technique to day-ahead solar irradiance forecasts by generating many samples from a diffusion model trained to super-resolve coarse-resolution numerical weather predictions to high-resolution weather satellite observations.
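
To make the super-resolution step concrete, below is a minimal numpy sketch of conditional ancestral sampling from a denoising diffusion model, in the spirit of the approach described: repeating the reverse process yields an ensemble of high-resolution fields whose mean and spread give a probabilistic forecast. The denoiser (denoise_eps), the 64x64 field size, and the noise schedule are illustrative placeholders, not the trained network or data from the paper.

import numpy as np

rng = np.random.default_rng(0)
T = 50                                    # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)        # standard DDPM-style noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoise_eps(x_t, t, coarse):
    # Placeholder for a trained network that predicts the noise in x_t,
    # conditioned on the step t and the coarse-resolution NWP field.
    return 0.1 * (x_t - coarse)           # hypothetical, for illustration only

def sample_highres(coarse, shape, n_samples=8):
    # Draw several high-resolution samples conditioned on one coarse forecast.
    samples = []
    for _ in range(n_samples):
        x = rng.standard_normal(shape)    # start from pure noise
        for t in reversed(range(T)):
            eps = denoise_eps(x, t, coarse)
            # DDPM reverse (ancestral) update
            x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
            if t > 0:
                x += np.sqrt(betas[t]) * rng.standard_normal(shape)
        samples.append(x)
    return np.stack(samples)

coarse_nwp = rng.standard_normal((64, 64))    # toy coarse field, already upsampled
ensemble = sample_highres(coarse_nwp, (64, 64))
print(ensemble.mean(axis=0).shape, float(ensemble.std(axis=0).mean()))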

Tourbillon: a Physically Plausible Neural Architecture

Jul 22, 2021
Mohammadamin Tavakoli, Peter Sadowski, Pierre Baldi

In a physical neural system, backpropagation faces a number of obstacles, including the need for labeled data, the violation of the locality principle of learning, the need for symmetric connections, and the lack of modularity. Tourbillon is a new architecture that addresses all of these limitations. At its core, it consists of a stack of circular autoencoders followed by an output layer. The circular autoencoders are trained in self-supervised mode by recirculation algorithms and the top layer in supervised mode by stochastic gradient descent, with the option of propagating error information through the entire stack using non-symmetric connections. While the Tourbillon architecture is meant primarily to address physical constraints, rather than to improve current engineering applications of deep learning, we demonstrate its viability on standard benchmark datasets including MNIST, Fashion MNIST, and CIFAR10. We show that Tourbillon can achieve comparable performance to models trained with backpropagation, and outperform models trained with other physically plausible algorithms, such as feedback alignment.
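
As a rough illustration of the building block, the numpy toy below trains a single circular autoencoder with a recirculation-style update: activity makes two passes around the input-hidden loop, and each weight is adjusted using only the activities at its two ends on consecutive passes, with no backpropagated gradients. This is a sketch under simplifying assumptions (one block, tanh units, random patterns), not the Tourbillon implementation.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, lr = 32, 16, 0.05
W = 0.1 * rng.standard_normal((n_hid, n_in))    # one half of the circular loop
V = 0.1 * rng.standard_normal((n_in, n_hid))    # the other half (not tied to W.T)
X = np.tanh(rng.standard_normal((20, n_in)))    # toy unlabeled patterns
f = np.tanh

for step in range(2000):
    x0 = X[step % len(X)]
    h0 = f(W @ x0)                  # first pass around the loop
    x1 = f(V @ h0)                  # reconstruction
    h1 = f(W @ x1)                  # second, recirculated pass
    # Local updates: pre-synaptic activity times the change in the
    # post-synaptic activity between the two passes.
    V += lr * np.outer(x0 - x1, h0)
    W += lr * np.outer(h0 - h1, x1)

recon = f(V @ f(W @ X.T))
print(float(np.mean((X.T - recon) ** 2)))       # reconstruction error on the toy set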

Sherpa: Robust Hyperparameter Optimization for Machine Learning

May 08, 2020
Lars Hertel, Julian Collado, Peter Sadowski, Jordan Ott, Pierre Baldi

Sherpa is a hyperparameter optimization library for machine learning models. It is specifically designed for problems with computationally expensive, iterative function evaluations, such as the hyperparameter tuning of deep neural networks. With Sherpa, scientists can quickly optimize hyperparameters using a variety of powerful and interchangeable algorithms. Sherpa can be run on either a single machine or in parallel on a cluster. Finally, an interactive dashboard enables users to view the progress of models as they are trained, cancel trials, and explore which hyperparameter combinations are working best. Sherpa empowers machine learning practitioners by automating the more tedious aspects of model tuning. Its source code and documentation are available at https://github.com/sherpa-ai/sherpa.
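
A minimal usage sketch in the spirit of the examples that ship with Sherpa is shown below; the hyperparameter names and the stand-in objective are invented for illustration, and the exact class and method names should be checked against the library's documentation.

import sherpa

parameters = [
    sherpa.Continuous(name='lr', range=[1e-4, 1e-1], scale='log'),
    sherpa.Discrete(name='num_units', range=[32, 256]),
]
algorithm = sherpa.algorithms.RandomSearch(max_num_trials=20)
study = sherpa.Study(parameters=parameters,
                     algorithm=algorithm,
                     lower_is_better=True,       # we minimize validation loss
                     disable_dashboard=True)

for trial in study:
    lr = trial.parameters['lr']
    units = trial.parameters['num_units']
    for iteration in range(10):                  # e.g. training epochs
        # Stand-in for training a model and measuring validation loss.
        val_loss = (lr - 0.01) ** 2 + 1.0 / units + 0.1 / (iteration + 1)
        study.add_observation(trial=trial, iteration=iteration,
                              objective=val_loss)
    study.finalize(trial=trial)

print(study.get_best_result())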

Learning in the Machine: the Symmetries of the Deep Learning Channel

Dec 22, 2017
Pierre Baldi, Peter Sadowski, Zhiqin Lu

In a physical neural system, learning rules must be local both in space and time. In order for learning to occur, non-local information must be communicated to the deep synapses through a communication channel, the deep learning channel. We identify several possible architectures for this learning channel (Bidirectional, Conjoined, Twin, Distinct) and six symmetry challenges: 1) symmetry of architectures; 2) symmetry of weights; 3) symmetry of neurons; 4) symmetry of derivatives; 5) symmetry of processing; and 6) symmetry of learning rules. Random backpropagation (RBP) addresses the second and third symmetries, and some of its variations, such as skipped RBP (SRBP), address the first and fourth symmetries. Here we address the last two desirable symmetries, showing through simulations that they can be achieved and that the learning channel is particularly robust to symmetry variations. Specifically, random backpropagation and its variations can be performed with the same non-linear neurons used in the main input-output forward channel, and the connections in the learning channel can be adapted using the same algorithm used in the forward channel, removing the need for any specialized hardware in the learning channel. Finally, we provide mathematical results in simple cases showing that the learning equations in the forward and backward channels converge to fixed points for almost any initial conditions. In symmetric architectures, if the weights in both channels are small at initialization, adaptation in both channels leads to weights that are essentially symmetric during and after learning. Biological connections are discussed.
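
The numpy toy below is one way to picture the last two symmetries; it is an illustration only, not the paper's simulations. The error signal travels backwards through the same tanh non-linearity used in the forward pass, and the backward matrices B1 and B2 are themselves adapted with a local outer-product rule of the same form as the forward updates, rather than being kept fixed and random.

import numpy as np

rng = np.random.default_rng(0)
f = np.tanh
lr = 0.01
W1 = 0.3 * rng.standard_normal((20, 10))    # forward channel
W2 = 0.3 * rng.standard_normal((20, 20))
W3 = 0.3 * rng.standard_normal((1, 20))
B2 = 0.3 * rng.standard_normal((20, 1))     # learning (backward) channel
B1 = 0.3 * rng.standard_normal((20, 20))

for step in range(2000):
    x = rng.standard_normal(10)
    y = np.sin(x.sum())                     # toy regression target
    h1 = f(W1 @ x)
    h2 = f(W2 @ h1)
    out = W3 @ h2
    e = out - y                             # output error
    # Error is passed back through non-linear units, like the forward pass.
    d2 = f(B2 @ e) * (1.0 - h2 ** 2)
    d1 = f(B1 @ d2) * (1.0 - h1 ** 2)
    # Forward weights: the usual local outer-product updates.
    W3 -= lr * np.outer(e, h2)
    W2 -= lr * np.outer(d2, h1)
    W1 -= lr * np.outer(d1, x)
    # Learning-channel weights adapted with the same kind of local rule.
    B2 -= lr * np.outer(d2, e)
    B1 -= lr * np.outer(d1, d2)

print(float(e[0]) ** 2)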

Learning in the Machine: Random Backpropagation and the Deep Learning Channel

Dec 22, 2017
Pierre Baldi, Peter Sadowski, Zhiqin Lu

Random backpropagation (RBP) is a variant of the backpropagation algorithm for training neural networks in which the transposes of the forward matrices are replaced by fixed random matrices in the calculation of the weight updates. It is remarkable both for its effectiveness, in spite of using random matrices to communicate error information, and because it completely removes the taxing requirement of maintaining symmetric weights in a physical neural system. To better understand random backpropagation, we first connect it to the notions of local learning and learning channels. Through this connection, we derive several alternatives to RBP, including skipped RBP (SRBP), adaptive RBP (ARBP), sparse RBP, and their combinations (e.g., ASRBP), and analyze their computational complexity. We then study their behavior through simulations using the MNIST and CIFAR-10 benchmark datasets. These simulations show that most of these variants work robustly, almost as well as backpropagation, and that multiplication by the derivatives of the activation functions is important. As a follow-up, we also study the low end of the number of bits required to communicate error information over the learning channel. We then provide partial intuitive explanations for some of the remarkable properties of RBP and its variations. Finally, we prove several mathematical results, including the convergence to fixed points of linear chains of arbitrary length, the convergence to fixed points of linear autoencoders with decorrelated data, the long-term existence of solutions for linear systems with a single hidden layer and convergence in special cases, and the convergence to fixed points of non-linear chains when the derivative of the activation functions is included.
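
For concreteness, the toy numpy sketch below contrasts the two updates for a single hidden layer: backpropagation sends the output error back through W2.T, while RBP sends it back through a fixed random matrix B; everything else, including the multiplication by the derivative of the activation, is unchanged. It illustrates the definition only, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(0)
f = np.tanh

def df(h):
    return 1.0 - h ** 2                      # tanh derivative, given h = tanh(a)

W1 = 0.3 * rng.standard_normal((32, 10))
W2 = 0.3 * rng.standard_normal((1, 32))
B = 0.3 * rng.standard_normal((32, 1))       # fixed random feedback matrix
lr = 0.01
use_rbp = True

for step in range(2000):
    x = rng.standard_normal(10)
    y = np.sin(x.sum())                      # toy regression target
    h = f(W1 @ x)
    out = W2 @ h
    e = out - y
    feedback = B if use_rbp else W2.T        # the only change RBP makes
    delta = (feedback @ e) * df(h)
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta, x)

print(float(e[0]) ** 2)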

Efficient Antihydrogen Detection in Antimatter Physics by Deep Learning

Jun 06, 2017
Peter Sadowski, Balint Radics, Ananya, Yasunori Yamazaki, Pierre Baldi

Antihydrogen is at the forefront of antimatter research at the CERN Antiproton Decelerator. Experiments aiming to test the fundamental CPT symmetry and antigravity effects require the efficient detection of antihydrogen annihilation events, which is performed using highly granular tracking detectors installed around an antimatter trap. Improving the efficiency of antihydrogen annihilation detection plays a central role in the final sensitivity of the experiments. We propose deep learning as a novel technique to analyze antihydrogen annihilation data, and compare its performance with a traditional track-and-vertex reconstruction method. We report that the deep learning approach yields a significant improvement, tripling event coverage while simultaneously improving performance by over 5% in terms of area under the ROC curve (AUC).

Decorrelated Jet Substructure Tagging using Adversarial Neural Networks

Mar 10, 2017
Chase Shimmin, Peter Sadowski, Pierre Baldi, Edison Weik, Daniel Whiteson, Edward Goul, Andreas Søgaard

We describe a strategy for constructing a neural network jet substructure tagger which powerfully discriminates boosted decay signals while remaining largely uncorrelated with the jet mass. This reduces the impact of systematic uncertainties in background modeling while enhancing signal purity, resulting in improved discovery significance relative to existing taggers. The network is trained using an adversarial strategy, resulting in a tagger that learns to balance classification accuracy with decorrelation. As a benchmark scenario, we consider the case where large-radius jets originating from a boosted resonance decay are discriminated from a background of nonresonant quark and gluon jets. We show that in the presence of systematic uncertainties on the background rate, our adversarially-trained, decorrelated tagger considerably outperforms a conventionally trained neural network, despite having a slightly worse signal-background separation power. We generalize the adversarial training technique to include a parametric dependence on the signal hypothesis, training a single network that provides optimized, interpolatable decorrelated jet tagging across a continuous range of hypothetical resonance masses, after training on discrete choices of the signal mass.

* Phys. Rev. D 96, 074034 (2017)  
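
A schematic PyTorch sketch of the adversarial setup is given below, using synthetic data and alternating updates: an adversary is trained to regress the jet mass from the tagger's output, and the tagger is trained to classify while being penalized whenever the adversary succeeds. The network sizes, the decorrelation weight lam, and the omission of the restriction of the adversary to background events are all simplifications for illustration, not the paper's configuration.

import torch
import torch.nn as nn

torch.manual_seed(0)
n = 4096
mass = torch.rand(n, 1)                          # stand-in jet mass, normalised
label = (torch.rand(n, 1) < 0.5).float()         # 1 = signal, 0 = background
feats = torch.cat([mass + 0.3 * torch.randn(n, 1),
                   label + 0.5 * torch.randn(n, 1)], dim=1)  # toy substructure features

tagger = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_t = torch.optim.Adam(tagger.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()
lam = 1.0                                        # strength of the decorrelation term

for step in range(500):
    score = tagger(feats)
    # 1) Adversary step: learn to predict the mass from the tagger output.
    loss_a = mse(adversary(score.detach()), mass)
    opt_a.zero_grad()
    loss_a.backward()
    opt_a.step()
    # 2) Tagger step: classify well while making the adversary's job hard.
    loss_t = bce(score, label) - lam * mse(adversary(score), mass)
    opt_t.zero_grad()
    loss_t.backward()
    opt_t.step()

print(float(loss_t))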

Revealing Fundamental Physics from the Daya Bay Neutrino Experiment using Deep Neural Networks

Dec 06, 2016
Evan Racah, Seyoon Ko, Peter Sadowski, Wahid Bhimji, Craig Tull, Sang-Yun Oh, Pierre Baldi, Prabhat

Experiments in particle physics produce enormous quantities of data that must be analyzed and interpreted by teams of physicists. This analysis is often exploratory, where scientists are unable to enumerate the possible types of signal prior to performing the experiment. Thus, tools for summarizing, clustering, visualizing and classifying high-dimensional data are essential. In this work, we show that meaningful physical content can be revealed by transforming the raw data into a learned high-level representation using deep neural networks, with measurements taken at the Daya Bay Neutrino Experiment as a case study. We further show how convolutional deep neural networks can provide an effective classification filter with greater than 97% accuracy across different classes of physics events, significantly better than other machine learning approaches.
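
The supervised part of this pipeline is, at heart, a small convolutional network applied to 2-D maps of detector activity; the PyTorch sketch below shows the general shape of such a classifier, with an illustrative 8x24 input map and a made-up number of event classes, not the architecture used in the paper.

import torch
import torch.nn as nn

n_classes = 5                                   # illustrative number of event categories
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, n_classes),
)

x = torch.randn(8, 1, 8, 24)                    # batch of toy 8x24 PMT-charge maps
logits = model(x)                               # per-class scores for each event
print(logits.shape)                             # torch.Size([8, 5])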

A Theory of Local Learning, the Learning Channel, and the Optimality of Backpropagation

Oct 21, 2016
Pierre Baldi, Peter Sadowski

In a physical neural system, where storage and processing are intimately intertwined, the rules for adjusting the synaptic weights can only depend on variables that are available locally, such as the activity of the pre- and post-synaptic neurons, resulting in local learning rules. A systematic framework for studying the space of local learning rules is obtained by first specifying the nature of the local variables, and then the functional form that ties them together into each learning rule. Such a framework also enables the systematic discovery of new learning rules and the exploration of relationships between learning rules and group symmetries. We study polynomial local learning rules stratified by their degree and analyze their behavior and capabilities in both linear and non-linear units and networks. Stacking local learning rules in deep feedforward networks leads to deep local learning. While deep local learning can learn interesting representations, it cannot learn complex input-output functions, even when targets are available for the top layer. Learning complex input-output functions requires local deep learning where target information is communicated to the deep layers through a backward learning channel. The nature of the communicated information about the targets and the structure of the learning channel partition the space of learning algorithms. We estimate the learning channel capacity associated with several algorithms and show that backpropagation outperforms them by simultaneously maximizing the information rate and minimizing the computational cost, even in recurrent networks. The theory clarifies the concept of Hebbian learning, establishes the power and limitations of local learning rules, introduces the learning channel which enables a formal analysis of the optimality of backpropagation, and explains the sparsity of the space of learning rules discovered so far.

* Neural Networks, vol. 83, pp. 51-74, Nov. 2016  
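
To fix ideas, the numpy snippet below writes two textbook local rules in this form, i.e. as low-degree polynomials in the variables available at a synapse (pre-synaptic input I, post-synaptic output O, and, where one is provided, a target T): a plain Hebbian rule and the delta rule. These are standard examples chosen for illustration, not the paper's full stratification of the rule space.

import numpy as np

rng = np.random.default_rng(0)

def hebb(w, I, O, eta=0.01):
    # Degree-2 local rule: the change in w_ij is proportional to O_i * I_j.
    return w + eta * np.outer(O, I)

def delta_rule(w, I, O, T, eta=0.01):
    # Local rule that also uses a target T available at the post-synaptic unit.
    return w + eta * np.outer(T - O, I)

# One linear unit learning a toy 3-input teacher with the delta rule.
teacher = np.array([0.5, -1.0, 2.0])
w = np.zeros((1, 3))
for _ in range(500):
    I = rng.standard_normal(3)
    T = np.array([I @ teacher])
    O = w @ I
    w = delta_rule(w, I, O, T)
print(np.round(w, 2))                           # approaches the teacher weights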