Niloofar Mireshghallah

Quantifying the Effect of Test Set Contamination on Generative Evaluations

Jan 07, 2026

Reinforcement Learning Improves Traversal of Hierarchical Knowledge in LLMs

Nov 08, 2025

Position: Privacy Is Not Just Memorization!

Oct 02, 2025

Bob's Confetti: Phonetic Memorization Attacks in Music and Video Generation

Jul 23, 2025

Strong Membership Inference Attacks on Massive Datasets and (Moderately) Large Language Models

May 24, 2025

Can Large Language Models Really Recognize Your Name?

May 20, 2025

A False Sense of Privacy: Evaluating Textual Data Sanitization Beyond Surface-level Privacy Leakage

Apr 28, 2025

ParaPO: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data

Apr 20, 2025

Information-Guided Identification of Training Data Imprint in (Proprietary) Large Language Models

Mar 15, 2025

Privacy Ripple Effects from Adding or Removing Personal Information in Language Model Training

Feb 21, 2025