Mingbao Lin

UIO-LLMs: Unbiased Incremental Optimization for Long-Context LLMs
Jun 26, 2024

UniPTS: A Unified Framework for Proficient Post-Training Sparsity
May 29, 2024

Boosting Multimodal Large Language Models with Visual Tokens Withdrawal for Rapid Inference
May 09, 2024

ObjectAdd: Adding Objects into Image via a Training-Free Diffusion Modification Fashion
May 02, 2024

CutDiffusion: A Simple, Fast, Cheap, and Strong Diffusion Extrapolation Method
Apr 23, 2024

Boosting the Cross-Architecture Generalization of Dataset Distillation through an Empirical Study
Dec 09, 2023

I&S-ViT: An Inclusive & Stable Method for Pushing the Limit of Post-Training ViTs Quantization
Nov 16, 2023

Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs
Oct 17, 2023

Unified and Dynamic Graph for Temporal Character Grouping in Long Videos
Aug 29, 2023

MemoChat: Tuning LLMs to Use Memos for Consistent Long-Range Open-Domain Conversation
Aug 23, 2023