Seyoung Kim

Neural Network Training with Asymmetric Crosspoint Elements

Jan 31, 2022

EiGLasso for Scalable Sparse Kronecker-Sum Inverse Covariance Estimation

May 20, 2021

SEMULATOR: Emulating the Dynamics of Crossbar Array-based Analog Neural System with Regression Neural Networks

Jan 19, 2021

Zero-shifting Technique for Deep Neural Network Training on Resistive Cross-point Arrays

Aug 02, 2019

Analog CMOS-based Resistive Processing Unit for Deep Neural Network Training

Jun 20, 2017

Large-Scale Optimization Algorithms for Sparse Conditional Gaussian Graphical Models

Dec 26, 2015

Tree-guided group lasso for multi-response regression with structured sparsity, with an application to eQTL mapping

Sep 28, 2012

Smoothing proximal gradient method for general structured sparse regression

Jun 29, 2012

Smoothing Proximal Gradient Method for General Structured Sparse Learning

Feb 14, 2012

Graph-Structured Multi-task Regression and an Efficient Optimization Method for General Fused Lasso

May 20, 2010