Mingyi Hong

EMC$^2$: Efficient MCMC Negative Sampling for Contrastive Learning with Global Convergence
Apr 16, 2024

Pre-training Differentially Private Models with Limited Public Data
Feb 28, 2024

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark
Feb 26, 2024

A Survey of Advances in Optimization Methods for Wireless Communication System Design
Jan 22, 2024

MADA: Meta-Adaptive Optimizers through hyper-gradient Descent
Jan 17, 2024

Krylov Cubic Regularized Newton: A Subspace Second-Order Method with Dimension-Free Convergence Rate
Jan 05, 2024

Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach
Nov 24, 2023

Demystifying Poisoning Backdoor Attacks from a Statistical Perspective
Oct 18, 2023

Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning
Oct 16, 2023

A Bayesian Approach to Robust Inverse Reinforcement Learning
Sep 15, 2023