Tayfun Gokmen

Fast offset corrected in-memory training

Mar 08, 2023
Malte J. Rasch, Fabio Carta, Omebayode Fagbohungbe, Tayfun Gokmen

In-memory computing with resistive crossbar arrays has been suggested to accelerate deep-learning workloads in a highly efficient manner. To unleash the full potential of in-memory computing, it is desirable to accelerate the training as well as the inference of large deep neural networks (DNNs). In the past, specialized in-memory training algorithms have been proposed that not only accelerate the forward and backward passes, but also establish tricks to update the weights in-memory and in parallel. However, the state-of-the-art algorithm (Tiki-Taka version 2 (TTv2)) still requires near-perfect offset correction and suffers from potential biases that might occur due to programming and estimation inaccuracies, as well as longer-term instabilities of the device materials. Here we propose and describe two new and improved algorithms for in-memory computing, Chopped-TTv2 (c-TTv2) and Analog Gradient Accumulation with Dynamic reference (AGAD), that retain the same runtime complexity but correct for any remaining offsets using choppers. These algorithms greatly relax the device requirements and thus expand the scope of materials that could be employed for such fast in-memory DNN training.

* 14 pages, 10 figures 
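The chopper idea can be illustrated in isolation. The toy Python sketch below is our own illustration, not the c-TTv2 or AGAD update rule itself: a gradient signal contaminated by a constant, unknown offset is accumulated; modulating the quantity with an alternating sign before accumulation and demodulating at read-out cancels the offset on average while the true gradient adds up coherently. All numbers are made up for illustration.

```python
# Toy chopper demonstration: a constant offset biases plain accumulation,
# but sign-alternating (chopped) accumulation averages the offset to zero
# while the true gradient signal adds up coherently.
import numpy as np

rng = np.random.default_rng(0)
true_grad = 0.05        # signal we want to accumulate
offset = 0.20           # spurious device/programming offset
n_steps = 200

plain, chopped = 0.0, 0.0
for t in range(n_steps):
    plain += true_grad + offset + 0.01 * rng.standard_normal()
    c = 1.0 if t % 2 == 0 else -1.0            # chopper sign
    # the signal enters modulated by c, the offset does not; demodulate by c
    chopped += c * (c * true_grad + offset + 0.01 * rng.standard_normal())

print("without chopper:", plain / n_steps)     # ~ true_grad + offset
print("with chopper:   ", chopped / n_steps)   # ~ true_grad
```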

Neural Network Training with Asymmetric Crosspoint Elements

Jan 31, 2022
Murat Onen, Tayfun Gokmen, Teodor K. Todorov, Tomasz Nowicki, Jesus A. del Alamo, John Rozen, Wilfried Haensch, Seyoung Kim

Analog crossbar arrays comprising programmable nonvolatile resistors are under intense investigation for acceleration of deep neural network training. However, the ubiquitous asymmetric conductance modulation of practical resistive devices critically degrades the classification performance of networks trained with conventional algorithms. Here, we describe and experimentally demonstrate an alternative fully-parallel training algorithm: Stochastic Hamiltonian Descent. Instead of conventionally tuning weights in the direction of the error function gradient, this method programs the network parameters to successfully minimize the total energy (Hamiltonian) of the system that incorporates the effects of device asymmetry. We provide critical intuition on why device asymmetry is fundamentally incompatible with conventional training algorithms and how the new approach exploits it as a useful feature instead. Our technique enables immediate realization of analog deep learning accelerators based on readily available device technologies.
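As a concrete picture of the asymmetry problem, the sketch below uses a soft-bound device model (our own assumption, a common idealization, not necessarily the devices used in the paper): a positive pulse moves the weight toward its upper bound and a negative pulse toward its lower bound, with step sizes that shrink near the bounds. Equal numbers of up and down pulses then pull the weight toward the device's symmetry point rather than leaving it in place, which is why naive gradient-descent updates are distorted. The paper's Stochastic Hamiltonian Descent itself is not reproduced here.

```python
# Minimal soft-bound device model (illustrative assumption): up/down pulses
# only cancel exactly at the symmetry point, so alternating pulses drag the
# weight toward that point instead of leaving it where gradient descent put it.
def pulse(w, sign, dw=0.01, w_max=1.0, w_min=-1.0):
    if sign > 0:
        return w + dw * (w_max - w)   # up-step shrinks near w_max
    return w - dw * (w - w_min)       # down-step shrinks near w_min

w = 0.6
for _ in range(100):                  # 100 alternating up/down pulse pairs
    w = pulse(w, +1)
    w = pulse(w, -1)
print(w)  # drifts toward the symmetry point (0 here), not back to 0.6
```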


A flexible and fast PyTorch toolkit for simulating training and inference on analog crossbar arrays

Apr 05, 2021
Malte J. Rasch, Diego Moreda, Tayfun Gokmen, Manuel Le Gallo, Fabio Carta, Cindy Goldberg, Kaoutar El Maghraoui, Abu Sebastian, Vijay Narayanan

We introduce the IBM Analog Hardware Acceleration Kit, a new, first-of-its-kind open-source toolkit to simulate analog crossbar arrays conveniently from within PyTorch (freely available at https://github.com/IBM/aihwkit). The toolkit is under active development and is centered around the concept of an "analog tile", which captures the computations performed on a crossbar array. Analog tiles are building blocks that can be used to extend existing network modules with analog components and to compose arbitrary artificial neural networks (ANNs) using the flexibility of the PyTorch framework. Analog tiles can be conveniently configured to emulate a plethora of different analog hardware characteristics and their non-idealities, such as device-to-device and cycle-to-cycle variations, resistive device response curves, and weight and output noise. Additionally, the toolkit makes it possible to design custom unit cell configurations and to use advanced analog optimization algorithms such as Tiki-Taka. Moreover, the backward and update behavior can be set to "ideal" to enable hardware-aware training features for chips that target inference acceleration only. To evaluate the inference accuracy of such chips over time, we provide statistical programming-noise and drift models calibrated on phase-change memory hardware. Our new toolkit is fully GPU-accelerated and can be used to conveniently estimate the impact of the material properties and non-idealities of future analog technology on the accuracy of arbitrary ANNs.

* Submitted to AICAS2021 
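A minimal usage sketch, following the pattern documented in the toolkit's repository (module paths and class names may differ between versions, so treat the exact imports as assumptions): an analog fully connected layer backed by a simulated crossbar tile is built, wrapped with the analog-aware SGD optimizer, and trained like any other PyTorch module.

```python
import torch
from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogSGD
from aihwkit.simulator.configs import SingleRPUConfig
from aihwkit.simulator.configs.devices import ConstantStepDevice

# An "analog tile" with a simple resistive device model backs the layer.
rpu_config = SingleRPUConfig(device=ConstantStepDevice())
model = AnalogLinear(4, 2, bias=True, rpu_config=rpu_config)

# AnalogSGD routes the weight updates to the analog tiles.
opt = AnalogSGD(model.parameters(), lr=0.1)
opt.regroup_param_groups(model)

x = torch.tensor([[0.1, 0.2, 0.3, 0.4]])
target = torch.tensor([[0.5, 0.5]])
for _ in range(10):
    opt.zero_grad()
    loss = (model(x) - target).pow(2).sum()
    loss.backward()
    opt.step()
```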

Algorithm for Training Neural Networks on Resistive Device Arrays

Sep 17, 2019
Tayfun Gokmen, Wilfried Haensch

Hardware architectures composed of resistive cross-point device arrays can provide significant power and speed benefits for deep neural network training workloads using the stochastic gradient descent (SGD) and backpropagation (BP) algorithms. The training accuracy on this imminent analog hardware, however, strongly depends on the switching characteristics of the cross-point elements. One of the key requirements is that these resistive devices must change conductance in a symmetric fashion when subjected to positive or negative pulse stimuli. Here, we present a new training algorithm, called the "Tiki-Taka" algorithm, that eliminates this stringent symmetry requirement. We show that device asymmetry introduces an unintentional implicit cost term into the SGD algorithm, whereas in the "Tiki-Taka" algorithm a coupled dynamical system simultaneously minimizes the original objective function of the neural network and the unintentional cost term due to device asymmetry in a self-consistent fashion. We tested the validity of this new algorithm on a range of network architectures, including fully connected, convolutional and LSTM networks. Simulation results on these networks show that whatever accuracy is achieved using the conventional SGD algorithm with symmetric (ideal) device switching characteristics is also achieved using the "Tiki-Taka" algorithm with non-symmetric (non-ideal) device switching characteristics. Moreover, all the operations performed on the arrays remain parallel, and therefore the implementation cost of this new algorithm on array architectures is minimal; it maintains the aforementioned power and speed benefits. These algorithmic improvements are crucial for relaxing the material specifications and realizing technologically viable resistive crossbar arrays that outperform digital accelerators on similar training tasks.

* 26 pages, 7 figures 
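The coupled-system structure can be sketched schematically in a few lines of Python. The sketch below reflects our reading of the abstract with a toy soft-bound device model; the constants, the transfer schedule, and the device model are illustrative assumptions, not the paper's settings. Gradient outer products are accumulated in parallel on an auxiliary array A, which is periodically read out and slowly transferred into the main array C; the forward pass uses a fixed combination of the two.

```python
# Schematic two-matrix ("Tiki-Taka"-style) coupling with a toy asymmetric device.
import numpy as np

rng = np.random.default_rng(1)
n_out, n_in = 8, 16
A = np.zeros((n_out, n_in))               # fast array accumulating gradients
C = rng.normal(0.0, 0.1, (n_out, n_in))   # slow array holding the weights
gamma, lr, transfer_lr, transfer_every = 0.1, 0.01, 0.1, 4

def device_update(W, delta, w_max=1.0, w_min=-1.0):
    """Toy asymmetric (soft-bound) in-place update of an analog array."""
    up, down = delta > 0, delta < 0
    W[up] += delta[up] * (w_max - W[up])        # up-steps shrink near w_max
    W[down] += delta[down] * (W[down] - w_min)  # down-steps shrink near w_min

for step in range(100):
    x = rng.normal(size=n_in)                            # input activations
    err = (C + gamma * A) @ x - rng.normal(size=n_out)   # stand-in backpropagated error
    device_update(A, -lr * np.outer(err, x))             # parallel rank-1 update onto A
    if step % transfer_every == 0:                       # occasional slow transfer A -> C
        col = (step // transfer_every) % n_in
        device_update(C[:, col], transfer_lr * A[:, col])
```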

Zero-shifting Technique for Deep Neural Network Training on Resistive Cross-point Arrays

Aug 02, 2019
Hyungjun Kim, Malte Rasch, Tayfun Gokmen, Takashi Ando, Hiroyuki Miyazoe, Jae-Joon Kim, John Rozen, Seyoung Kim

A resistive memory device-based computing architecture is one of the promising platforms for energy-efficient Deep Neural Network (DNN) training accelerators. The key technical challenge in realizing such accelerators is to accumulate the gradient information without bias. Unlike digital numbers in software, which can be assigned and accessed with the desired accuracy, numbers stored in resistive memory devices can only be manipulated following the physics of the device, which can significantly limit training performance. Therefore, additional techniques and algorithm-level remedies are required to achieve the best possible performance in resistive memory device-based accelerators. In this paper, we analyze asymmetric conductance modulation characteristics in RRAM using a soft-bound synapse model and present an in-depth analysis of the relationship between device characteristics and DNN model accuracy, using a 3-layer DNN trained on the MNIST dataset. We show that an imbalance between up and down updates leads to poor network performance. We introduce the concept of a symmetry point and propose a zero-shifting technique that can compensate for the imbalance by programming the reference device and thereby shifting the zero point of the weight. Using this zero-shifting method, we show that network performance improves dramatically for imbalanced synapse devices.
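A few lines make the symmetry-point idea concrete. In the sketch below (a soft-bound device model with made-up numbers, used as an assumption for illustration), the up-step shrinks as the conductance approaches its upper bound and the down-step shrinks toward the lower bound; the two are equal only at the symmetry point. Zero-shifting reads each weight as the difference between its device and a reference device programmed to that point, so that weight zero coincides with the conductance at which updates are balanced.

```python
# Symmetry point of a soft-bound device, and the zero-shifted weight read-out.
g_max, g_min, dg = 1.0, -1.0, 0.01

def step_up(g):   return dg * (g_max - g)   # magnitude of a positive pulse
def step_down(g): return dg * (g - g_min)   # magnitude of a negative pulse

g_sym = (g_max + g_min) / 2.0    # where step_up(g) == step_down(g)
g_ref = g_sym                    # reference device programmed to the symmetry point

def weight(g):                   # zero-shifted weight read-out
    return g - g_ref

print(step_up(g_sym), step_down(g_sym))  # equal: balanced updates at w == 0
print(weight(g_sym))                     # 0.0: weight zero sits at the symmetry point
```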


Training large-scale ANNs on simulated resistive crossbar arrays

Jun 06, 2019
Malte J. Rasch, Tayfun Gokmen, Wilfried Haensch

Accelerating the training of artificial neural networks (ANNs) with analog resistive crossbar arrays is a promising idea. While the concept has been verified on very small ANNs and toy datasets (such as MNIST), more realistically sized ANNs and datasets have not yet been tackled. However, it is to be expected that device-material and hardware-design constraints, such as noisy computations, a finite number of resistive states of the device materials, saturating weight and activation ranges, and the limited precision of analog-to-digital converters, will pose significant challenges to the successful training of state-of-the-art ANNs. Using analog hardware-aware ANN training simulations, we here explore a number of simple algorithmic compensatory measures to cope with analog noise and limited weight and output ranges and resolutions, which dramatically improve the simulated training performance on RPU arrays for intermediate- to large-scale ANNs.
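As one example of the kind of compensatory measure meant here, the sketch below (our own illustration; the specific scaling scheme and the noise and bound values are assumptions, not the paper's exact prescriptions) wraps a noisy, range-limited analog matrix-vector product with an input rescaling step, so the signal occupies the available analog input range, and with a retry loop that shrinks the inputs whenever the bounded output saturates.

```python
# Input rescaling plus output-bound handling around a noisy, clipped matvec.
import numpy as np

rng = np.random.default_rng(2)
OUT_BOUND, OUT_NOISE = 10.0, 0.05   # illustrative analog output range and noise

def analog_matvec(W, x):
    """Noisy, clipped stand-in for the crossbar operation."""
    y = W @ x + OUT_NOISE * rng.standard_normal(W.shape[0])
    return np.clip(y, -OUT_BOUND, OUT_BOUND)

def compensated_matvec(W, x, max_retries=5):
    alpha = float(np.max(np.abs(x))) or 1.0   # scale inputs to fill the analog range
    y = analog_matvec(W, x / alpha)
    retries = 0
    while np.max(np.abs(y)) >= OUT_BOUND and retries < max_retries:
        alpha *= 2.0                          # output saturated: shrink inputs, retry
        y = analog_matvec(W, x / alpha)
        retries += 1
    return alpha * y                          # rescale back to the original range

W = rng.normal(size=(8, 16))
x = rng.normal(size=16)
print(np.linalg.norm(compensated_matvec(W, x) - W @ x))  # small residual error
```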


Efficient ConvNets for Analog Arrays

Jul 03, 2018
Malte J. Rasch, Tayfun Gokmen, Mattia Rigotti, Wilfried Haensch

Analog arrays are a promising upcoming hardware technology with the potential to drastically speed up deep learning. Their main advantage is that they compute matrix-vector products in constant time, irrespective of the size of the matrix. However, early convolution layers in ConvNets map very unfavorably onto analog arrays, because kernel matrices are typically small and the constant-time operation needs to be iterated sequentially a large number of times, reducing the speed-up advantage for ConvNets. Here, we propose to replicate the kernel matrix of a convolution layer on distinct analog arrays and to randomly divide parts of the compute among them, so that multiple kernel matrices are trained in parallel. With this modification, analog arrays execute ConvNets with an acceleration factor that is proportional to the number of kernel matrices used per layer (16-128 tested here). Despite having more free parameters, we show analytically and in numerical experiments that this convolution architecture is self-regularizing and implicitly learns similar filters across arrays. We also report superior performance on a number of datasets and increased robustness to adversarial attacks. Our investigation suggests revising the notion that mixed analog-digital hardware is not suitable for ConvNets.
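The replication scheme can be emulated digitally with a short PyTorch sketch. In the sketch below (our own emulation: the layer computes all replicas and masks them, which is wasteful in software but mirrors the parallel analog mapping; layer sizes and the number of replicas are illustrative assumptions), each output location is assigned at random to one of K independent weight copies, so that the copies are trained in parallel on disjoint parts of the compute.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReplicatedConv2d(nn.Module):
    """Hold K copies of the same convolution and route each output location
    to a randomly chosen copy, emulating parallel training on replica arrays."""

    def __init__(self, in_ch, out_ch, kernel_size, k_replicas=4, **kw):
        super().__init__()
        self.replicas = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size, **kw) for _ in range(k_replicas)]
        )

    def forward(self, x):
        outs = torch.stack([conv(x) for conv in self.replicas])  # (K, B, C, H, W)
        k, b, _, h, w = outs.shape
        choice = torch.randint(k, (b, h, w), device=x.device)    # replica per location
        mask = F.one_hot(choice, k).permute(3, 0, 1, 2)          # (K, B, H, W)
        return (outs * mask.unsqueeze(2).to(outs.dtype)).sum(0)  # (B, C, H, W)

layer = ReplicatedConv2d(3, 16, 3, k_replicas=8, padding=1)
y = layer(torch.randn(2, 3, 32, 32))   # -> shape (2, 16, 32, 32)
```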


Training LSTM Networks with Resistive Cross-Point Devices

Jun 01, 2018
Tayfun Gokmen, Malte Rasch, Wilfried Haensch

In our previous work we have shown that resistive cross-point devices, so-called Resistive Processing Unit (RPU) devices, can provide significant power and speed benefits when training deep fully connected networks as well as convolutional neural networks. In this work, we further extend the RPU concept to training recurrent neural networks (RNNs), namely LSTMs. We show that the mapping of recurrent layers is very similar to the mapping of fully connected layers and that the RPU concept can therefore potentially provide large acceleration factors for RNNs as well. In addition, we study the effect of various device imperfections and system parameters on training performance. Symmetry of updates becomes even more crucial for RNNs; already a few percent of asymmetry results in an increase in test error compared to the ideal case trained with floating-point numbers. Furthermore, the input signal resolution to the device arrays needs to be at least 7 bits for successful training. However, we show that a stochastic rounding scheme can reduce the required input signal resolution back to 5 bits. Further, we find that RPU device variations and hardware noise are enough to mitigate overfitting, so that there is less need for dropout. We note that the models trained here are roughly 1500 times larger than the fully connected network trained on the MNIST dataset in terms of the total number of multiplication and summation operations performed per epoch. Thus, we attempt here to study the validity of the RPU approach for large-scale networks.

* 17 pages, 5 figures 
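The stochastic rounding idea is easy to sketch on its own. In the toy Python below (bit width and input range are illustrative; the exact scheme in the paper may differ), an input is quantized to 5 bits, but the rounding direction is drawn at random with probability proportional to the distance to the two neighboring grid levels, so the quantizer is unbiased in expectation, which is what lets a coarse input resolution still support training.

```python
# Unbiased (stochastic) rounding onto a coarse 5-bit input grid.
import numpy as np

rng = np.random.default_rng(3)

def stochastic_round(x, bits=5, x_max=1.0):
    """Quantize x in [-x_max, x_max] onto a (2**bits)-level grid, unbiased."""
    step = 2.0 * x_max / (2 ** bits - 1)
    scaled = np.clip(x, -x_max, x_max) / step   # position in units of one grid step
    low = np.floor(scaled)
    p_up = scaled - low                         # probability of rounding up
    return (low + (rng.random(np.shape(x)) < p_up)) * step

x = np.full(100_000, 0.013)
# Deterministic 5-bit rounding would map 0.013 to 0 (it is below half a step),
# but stochastic rounding recovers the value on average:
print(stochastic_round(x).mean())   # ~0.013
```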

Analog CMOS-based Resistive Processing Unit for Deep Neural Network Training

Jun 20, 2017
Seyoung Kim, Tayfun Gokmen, Hyung-Min Lee, Wilfried E. Haensch

Recently we have shown that an architecture based on resistive processing unit (RPU) devices has the potential to achieve significant acceleration in deep neural network (DNN) training compared to today's software-based DNN implementations running on CPUs/GPUs. However, currently available device candidates based on non-volatile memory technologies do not satisfy all the requirements to realize the RPU concept. Here, we propose an analog CMOS-based RPU design (CMOS RPU) that can store and process data locally and can be operated in a massively parallel manner. We analyze various properties of the CMOS RPU to evaluate its functionality and feasibility for accelerating DNN training.
