Lichan Hong

Bridging the Gap: Unpacking the Hidden Challenges in Knowledge Distillation for Online Ranking Systems

Aug 26, 2024

Leveraging LLM Reasoning Enhances Personalized Recommender Systems

Jul 22, 2024

Aligning Large Language Models with Recommendation Knowledge

Mar 30, 2024

Wisdom of Committee: Distilling from Foundation Model to Specialized Application Model

Feb 27, 2024

How to Train Data-Efficient LLMs

Feb 15, 2024

LEVI: Generalizable Fine-tuning via Layer-wise Ensemble of Different Views

Feb 07, 2024

Hiformer: Heterogeneous Feature Interactions Learning with Transformers for Recommender Systems

Nov 10, 2023

Talking Models: Distill Pre-trained Knowledge to Downstream Models via Interactive Communication

Oct 04, 2023

Density Weighting for Multi-Interest Personalized Recommendation

Aug 03, 2023

Online Matching: A Real-time Bandit System for Large-scale Recommendations

Jul 29, 2023