
Mingyi Hong

DiSK: Differentially Private Optimizer with Simplified Kalman Filter for Noise Reduction

Oct 04, 2024

DOPPLER: Differentially Private Optimizers with Low-pass Filter for Privacy Noise Reduction

Aug 24, 2024

Joint Demonstration and Preference Learning Improves Policy Alignment with Human Feedback

Jun 11, 2024

SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining

Jun 04, 2024

Tuning-Free Alignment of Diffusion Models with Direct Noise Optimization

May 29, 2024

Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment

May 29, 2024

Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models

May 24, 2024

EMC$^2$: Efficient MCMC Negative Sampling for Contrastive Learning with Global Convergence

Apr 16, 2024

Pre-training Differentially Private Models with Limited Public Data

Feb 28, 2024

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark

Feb 26, 2024