
Xuefei Ning

Can LLMs Learn by Teaching? A Preliminary Study

Jun 20, 2024

DiTFastAttn: Attention Compression for Diffusion Transformer Models

Jun 12, 2024

ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation

Jun 04, 2024

MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization

May 30, 2024

HetHub: A Heterogeneous Distributed Hybrid Training System for Large-Scale Models

May 25, 2024

DiM: Diffusion Mamba for Efficient High-Resolution Image Synthesis

May 23, 2024

A Survey on Efficient Inference for Large Language Models

Apr 22, 2024

Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better

Apr 08, 2024

FlashEval: Towards Fast and Accurate Evaluation of Text-to-Image Diffusion Generative Models

Mar 25, 2024

Evaluating Quantized Large Language Models

Feb 28, 2024