Sean Lie

MASSV: Multimodal Adaptation and Self-Data Distillation for Speculative Decoding of Vision-Language Models

May 15, 2025

SD²: Self-Distilled Sparse Drafters

Apr 10, 2025

Self-Data Distillation for Recovering Quality in Pruned Large Language Models

Oct 15, 2024

Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment

May 06, 2024

MediSwift: Efficient Sparse Pre-trained Biomedical Language Models

Mar 01, 2024

Sparse Iso-FLOP Transformations for Maximizing Training Efficiency

Mar 25, 2023

SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models

Mar 18, 2023