Hongseok Yang

Variational Partial Group Convolutions for Input-Aware Partial Equivariance of Rotations and Color-Shifts

Jul 05, 2024

An Infinite-Width Analysis on the Jacobian-Regularised Training of a Neural Network

Dec 06, 2023

Learning Symmetrization for Equivariance with Orbit Distance Minimization

Nov 13, 2023

Regularizing Towards Soft Equivariance Under Mixed Symmetries

Jun 01, 2023

Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning

Feb 02, 2023

Smoothness Analysis for Probabilistic Programs with Application to Optimised Variational Inference

Aug 22, 2022

Learning Symmetric Rules with SATNet

Jun 28, 2022

Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility

May 17, 2022

LobsDICE: Offline Imitation Learning from Observation via Stationary Distribution Correction Estimation

Feb 28, 2022

Scale Mixtures of Neural Network Gaussian Processes

Jul 03, 2021