Prateek Mittal

Efficient Data Shapley for Weighted Nearest Neighbor Algorithms

Jan 20, 2024

Private Fine-tuning of Large Language Models with Zeroth-order Optimization

Jan 09, 2024

PatchCURE: Improving Certifiable Robustness, Model Utility, and Computation Efficiency of Adversarial Patch Defenses

Oct 19, 2023

Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!

Oct 05, 2023

Threshold KNN-Shapley: A Linear-Time and Privacy-Friendly Approach to Data Valuation

Aug 30, 2023

BaDExpert: Extracting Backdoor Functionality for Accurate Backdoor Input Detection

Aug 23, 2023

Food Classification using Joint Representation of Visual and Textual Data

Aug 03, 2023

Visual Adversarial Examples Jailbreak Large Language Models

Jun 22, 2023

Differentially Private Image Classification by Learning Priors from Random Processes

Jun 08, 2023

Differentially Private In-Context Learning

May 02, 2023