Christopher De Sa

Model-Preserving Adaptive Rounding
May 29, 2025

Extracting memorized pieces of (copyrighted) books from open-weight language models
May 18, 2025

Compute-Optimal LLMs Provably Generalize Better With Scale
Apr 21, 2025

Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice
Dec 09, 2024

Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices
Oct 03, 2024

QTIP: Quantization with Trellises and Incoherence Processing
Jun 17, 2024

Gradient Descent on Logistic Regression with Non-Separable Data and Large Step Sizes
Jun 07, 2024

Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity
Jun 05, 2024

STAT: Shrinking Transformers After Training
May 29, 2024

QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks
Feb 06, 2024