Dong-Young Lim

Controllable Machine Unlearning via Gradient Pivoting

Oct 22, 2025

Flatness-Aware Stochastic Gradient Langevin Dynamics

Oct 02, 2025

TANDEM: Temporal Attention-guided Neural Differential Equations for Missingness in Time Series Classification

Aug 24, 2025

Modeling Irregular Astronomical Time Series with Neural Stochastic Delay Differential Equations

Aug 24, 2025

DGSAM: Domain Generalization via Individual Sharpness-Aware Minimization

Mar 30, 2025

Dual Cone Gradient Descent for Training Physics-Informed Neural Networks

Sep 27, 2024

On diffusion-based generative models and their error bounds: The log-concave case with full convergence estimates

Nov 22, 2023

Langevin dynamics based algorithm e-TH$\varepsilon$O POULA for stochastic optimization problems with discontinuous stochastic gradient

Oct 24, 2022

Non-asymptotic estimates for TUSLA algorithm for non-convex learning with applications to neural networks with ReLU activation function

Jul 19, 2021

Polygonal Unadjusted Langevin Algorithms: Creating stable and efficient adaptive algorithms for neural networks

May 28, 2021