
Qingyun Liu

Wisdom of Committee: Distilling from Foundation Model to Specialized Application Model

Feb 27, 2024

LEVI: Generalizable Fine-tuning via Layer-wise Ensemble of Different Views

Feb 07, 2024

Talking Models: Distill Pre-trained Knowledge to Downstream Models via Interactive Communication

Oct 04, 2023