Laurent Daudet

Signal processing after quadratic random sketching with optical units

Jul 27, 2023
Rémi Delogne, Vincent Schellekens, Laurent Daudet, Laurent Jacques


Random data sketching (or projection) is now a classical technique enabling, for instance, approximate numerical linear algebra and machine learning algorithms with reduced computational complexity and memory. In this context, the possibility of performing data processing (such as pattern detection or classification) directly in the sketched domain without accessing the original data was previously achieved for linear random sketching methods and compressive sensing. In this work, we show how to perform simple signal processing tasks (such as estimating local variations in an image) directly on random quadratic projections computed by an optical processing unit. The same approach enables naive data classification methods operating directly in the sketched domain. We report several experiments confirming the power of our approach.
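
The quadratic random projection described in the abstract can be simulated numerically. Below is a minimal sketch in which a complex Gaussian matrix stands in for the optical transmission matrix (the dimensions and seed are illustrative, not from the paper); it also checks the sign-invariance property that makes processing in this sketched domain non-trivial:

```python
import numpy as np

rng = np.random.default_rng(0)

def quadratic_sketch(x, A):
    """OPU-style quadratic sketch: element-wise squared modulus of A @ x."""
    return np.abs(A @ x) ** 2

d, m = 64, 2048                      # signal dimension, sketch dimension
# Complex Gaussian matrix modelling the optical transmission matrix
A = (rng.normal(size=(m, d)) + 1j * rng.normal(size=(m, d))) / np.sqrt(2)

x = rng.normal(size=d)
y = quadratic_sketch(x, A)

# The sketch is non-negative and blind to the global sign of the input:
# |A(-x)|^2 == |Ax|^2, so linear recovery methods do not apply directly.
assert np.all(y >= 0)
assert np.allclose(y, quadratic_sketch(-x, A))
```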

* Presented in ISCS23. arXiv admin note: substantial text overlap with arXiv:2212.00660, arXiv:1510.06664 

Signal processing with optical quadratic random sketches

Dec 01, 2022
Rémi Delogne, Vincent Schellekens, Laurent Daudet, Laurent Jacques


Random data sketching (or projection) is now a classical technique enabling, for instance, approximate numerical linear algebra and machine learning algorithms with reduced computational complexity and memory. In this context, the possibility of performing data processing (such as pattern detection or classification) directly in the sketched domain without accessing the original data was previously achieved for linear random sketching methods and compressive sensing. In this work, we show how to perform simple signal processing tasks (such as estimating local variations in an image) directly on random quadratic projections computed by an optical processing unit. The same approach enables naive data classification methods operating directly in the sketched domain. We report several experiments confirming the power of our approach.

* 7 pages, 4 figures 

Photonic co-processors in HPC: using LightOn OPUs for Randomized Numerical Linear Algebra

May 07, 2021
Daniel Hesslow, Alessandro Cappelli, Igor Carron, Laurent Daudet, Raphaël Lafargue, Kilian Müller, Ruben Ohana, Gustave Pariente, Iacopo Poli


Randomized Numerical Linear Algebra (RandNLA) is a powerful class of methods, widely used in High Performance Computing (HPC). RandNLA provides approximate solutions to linear algebra problems applied to large signals, at reduced computational cost. However, the randomization step for dimensionality reduction may itself become the computational bottleneck on traditional hardware. Leveraging near constant-time linear random projections delivered by LightOn Optical Processing Units, we show that randomization can be significantly accelerated, at negligible precision loss, in a wide range of important RandNLA algorithms, such as RandSVD or trace estimators.
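
To make the role of the random projection concrete, here is a standard randomized SVD (in the spirit of Halko et al., not the paper's own code): the product `M @ Omega` with a random test matrix is exactly the step an OPU could compute optically. All sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_svd(M, k, p=10):
    """Basic randomized SVD with oversampling p.

    The dense random projection M @ Omega dominates the cost on CPU;
    it is the operation an optical co-processor can accelerate.
    """
    n = M.shape[1]
    Omega = rng.normal(size=(n, k + p))    # random test matrix
    Q, _ = np.linalg.qr(M @ Omega)         # orthonormal basis for the range
    B = Q.T @ M                            # small projected problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# Exactly rank-40 test matrix: the randomized SVD recovers it to
# near machine precision.
M = rng.normal(size=(500, 40)) @ rng.normal(size=(40, 300))
U, s, Vt = rand_svd(M, k=40)
err = np.linalg.norm(M - (U * s) @ Vt) / np.linalg.norm(M)
```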

* This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 860830 

Hardware Beyond Backpropagation: a Photonic Co-Processor for Direct Feedback Alignment

Dec 11, 2020
Julien Launay, Iacopo Poli, Kilian Müller, Gustave Pariente, Igor Carron, Laurent Daudet, Florent Krzakala, Sylvain Gigan


The scaling hypothesis motivates the expansion of models past trillions of parameters as a path towards better performance. Recent significant developments, such as GPT-3, have been driven by this conjecture. However, as models scale up, training them efficiently with backpropagation becomes difficult. Because model, pipeline, and data parallelism distribute parameters and gradients over compute nodes, communication is challenging to orchestrate: this is a bottleneck to further scaling. In this work, we argue that alternative training methods can mitigate these issues and can inform the design of extreme-scale training hardware. Indeed, using a synaptically asymmetric method with a parallelizable backward pass, such as Direct Feedback Alignment, communication needs are drastically reduced. We present a photonic accelerator for Direct Feedback Alignment, able to compute random projections with trillions of parameters. We demonstrate our system on benchmark tasks, using both fully-connected and graph convolutional networks. Our hardware is the first architecture-agnostic photonic co-processor for training neural networks. This is a significant step towards building scalable hardware able to go beyond backpropagation, opening new avenues for deep learning.
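
The key idea of Direct Feedback Alignment is that the output error reaches each hidden layer through a fixed random matrix instead of the transposed forward weights, so the backward pass reduces to a random projection (the operation the photonic co-processor performs). A toy NumPy sketch, with all sizes, learning rate, and the synthetic task chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer classifier trained with Direct Feedback Alignment (DFA).
d, h, c, n = 20, 64, 3, 256
X = rng.normal(size=(n, d))
T = rng.normal(size=(d, c))
labels = (X @ T).argmax(axis=1)          # linearly generated targets
Y = np.eye(c)[labels]                    # one-hot encoding

W1 = 0.1 * rng.normal(size=(d, h))
W2 = 0.1 * rng.normal(size=(h, c))
B1 = 0.1 * rng.normal(size=(c, h))       # FIXED random feedback matrix

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(300):
    A1 = np.tanh(X @ W1)                 # forward pass
    P = softmax(A1 @ W2)
    e = P - Y                            # output error
    dW2 = A1.T @ e / n
    # DFA: the random projection e @ B1 replaces backprop's e @ W2.T,
    # so the hidden-layer update needs no transposed forward weights.
    dW1 = X.T @ ((e @ B1) * (1 - A1 ** 2)) / n
    W1 -= lr * dW1
    W2 -= lr * dW2

acc = (softmax(np.tanh(X @ W1) @ W2).argmax(axis=1) == labels).mean()
```

Because `B1` never changes, the backward projections for all layers can be computed independently and in parallel, which is what reduces the communication burden at scale.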

* 6 pages, 2 figures, 1 table. Oral at the Beyond Backpropagation Workshop, NeurIPS 2020 

Online Change Point Detection in Molecular Dynamics With Optical Random Features

Jun 17, 2020
Amélie Chatelain, Giuseppe Luca Tommasone, Laurent Daudet, Iacopo Poli


Proteins are made of constantly fluctuating atoms, but can occasionally undergo large-scale changes. Such transitions are of biological interest, linking the structure of a protein to its function within a cell. Atomic-level simulations, such as Molecular Dynamics (MD), are used to study these events. However, molecular dynamics simulations produce time series with many observables, while changes often affect only a few of them. Detecting conformational changes has therefore proven challenging for most change-point detection algorithms. In this work, we focus on the identification of such events given many noisy observables. In particular, we show that the No-prior-Knowledge Exponential Weighted Moving Average (NEWMA) algorithm can be used alongside optical hardware to successfully identify these changes in real time. Our method does not need to distinguish between the protein and its background. For larger simulations, it is faster than traditional silicon hardware and has a lower memory footprint. This technique may enhance the sampling of the conformational space of molecules. It may also be used to detect change points in other sequential data with a large number of features.
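
NEWMA tracks two exponentially weighted moving averages of random features of the stream, with different forgetting factors; their distance spikes when the underlying distribution changes. A minimal numerical sketch, with random Fourier features standing in for the optical features and all constants (dimensions, forgetting factors, change point) chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_features(x, W):
    """RBF-style random Fourier features; an OPU would compute W @ x optically."""
    z = W @ x
    return np.concatenate([np.cos(z), np.sin(z)]) / np.sqrt(W.shape[0])

d, m = 10, 200
W = rng.normal(size=(m, d)) / np.sqrt(d)   # 1/sqrt(d) bandwidth heuristic

# Synthetic stream with a change point at t = 300: the mean shifts.
stream = np.vstack([rng.normal(0.0, 1.0, size=(300, d)),
                    rng.normal(2.0, 1.0, size=(300, d))])

lam_fast, lam_slow = 0.2, 0.05             # two forgetting factors
z_fast = np.zeros(2 * m)
z_slow = np.zeros(2 * m)
stat = []
for x in stream:
    phi = random_features(x, W)
    # Fast average adapts quickly, slow average lags behind:
    z_fast = (1 - lam_fast) * z_fast + lam_fast * phi
    z_slow = (1 - lam_slow) * z_slow + lam_slow * phi
    stat.append(np.linalg.norm(z_fast - z_slow))
stat = np.array(stat)

# The detection statistic should peak shortly after the change at t = 300.
```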

* 15 pages, 12 figures 

Light-in-the-loop: using a photonics co-processor for scalable training of neural networks

Jun 03, 2020
Julien Launay, Iacopo Poli, Kilian Müller, Igor Carron, Laurent Daudet, Florent Krzakala, Sylvain Gigan


As neural networks grow larger, more complex, and more data-hungry, training costs are skyrocketing. Especially when lifelong learning is necessary, such as in recommender systems or self-driving cars, this may soon become unsustainable. In this study, we present the first optical co-processor able to accelerate the training phase of digitally implemented neural networks. We rely on direct feedback alignment as an alternative to backpropagation and perform the error projection step optically. Leveraging the optical random projections delivered by our co-processor, we demonstrate its use to train a neural network for handwritten digit recognition.

* 2 pages, 1 figure 

Kernel computations from large-scale random features obtained by Optical Processing Units

Dec 02, 2019
Ruben Ohana, Jonas Wacker, Jonathan Dong, Sébastien Marmin, Florent Krzakala, Maurizio Filippone, Laurent Daudet


Approximating kernel functions with random features (RFs) has been a successful application of random projections for nonparametric estimation. However, performing random projections presents computational challenges for large-scale problems. Recently, a new optical hardware called Optical Processing Unit (OPU) has been developed for fast and energy-efficient computation of large-scale RFs in the analog domain. More specifically, the OPU performs the multiplication of input vectors by a large random matrix with complex-valued i.i.d. Gaussian entries, followed by the application of an element-wise squared absolute value operation - this last nonlinearity being intrinsic to the sensing process. In this paper, we show that this operation results in a dot-product kernel that has connections to the polynomial kernel, and we extend this computation to arbitrary powers of the feature map. Experiments demonstrate that the OPU kernel and its RF approximation achieve competitive performance in applications using kernel ridge regression and transfer learning for image classification. Crucially, thanks to the use of the OPU, these results are obtained with time and energy savings.
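
Under the i.i.d. complex Gaussian model stated in the abstract, standard Gaussian moment identities (Isserlis/Wick) give the limiting dot-product kernel of the squared-modulus features as k(x, y) = ⟨x, y⟩² + ‖x‖²‖y‖² for real inputs. A minimal Monte-Carlo check of this closed form (dimensions and seed are arbitrary; this is a sketch, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 8, 200_000                    # input dimension, number of random features
# i.i.d. complex Gaussian random matrix with unit-variance entries
A = (rng.normal(size=(m, d)) + 1j * rng.normal(size=(m, d))) / np.sqrt(2)

x, y = rng.normal(size=d), rng.normal(size=d)

# Monte-Carlo kernel estimate from the OPU-style features |A v|^2
k_rf = np.mean(np.abs(A @ x) ** 2 * np.abs(A @ y) ** 2)

# Closed form implied by complex Gaussian fourth moments:
# k(x, y) = <x, y>^2 + ||x||^2 ||y||^2
k_exact = np.dot(x, y) ** 2 + np.dot(x, x) * np.dot(y, y)

rel_err = abs(k_rf - k_exact) / k_exact
```

The relative error shrinks as 1/sqrt(m), which is the usual random-feature approximation regime the experiments in the paper operate in.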

* 5 pages, 3 figures, submitted to ICASSP 2020 