
Anshul Nasery

Are Robust LLM Fingerprints Adversarially Robust?

Sep 30, 2025

Scalable Fingerprinting of Large Language Models

Feb 11, 2025

OML: Open, Monetizable, and Loyal AI

Nov 01, 2024

PLeaS -- Merging Models with Permutations and Least Squares

Jul 02, 2024

PEEKABOO: Interactive Video Generation via Masked-Diffusion

Dec 12, 2023

Label Differential Privacy via Aggregation

Oct 20, 2023

End-to-End Neural Network Compression via $\frac{\ell_1}{\ell_2}$ Regularized Latency Surrogates

Jun 13, 2023

Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks

Oct 04, 2022

DAFT: Distilling Adversarially Fine-tuned Models for Better OOD Generalization

Aug 19, 2022

Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time

Aug 15, 2021