
Hansu Gu

Bidirectional Knowledge Distillation for Enhancing Sequential Recommendation with Large Language Models

May 23, 2025

EcomScriptBench: A Multi-task Benchmark for E-commerce Script Planning via Step-wise Intention-Driven Product Association

May 21, 2025

FedCIA: Federated Collaborative Information Aggregation for Privacy-Preserving Recommendation

Apr 19, 2025

UXAgent: A System for Simulating Usability Testing of Web Design with LLM Agents

Apr 13, 2025

Enhancing LLM-Based Recommendations Through Personalized Reasoning

Feb 19, 2025

Mitigating Popularity Bias in Collaborative Filtering through Fair Sampling

Feb 19, 2025

Enhancing Cross-Domain Recommendations with Memory-Optimized LLM-Based User Agents

Feb 19, 2025

UXAgent: An LLM Agent-Based Usability Testing Framework for Web Design

Feb 18, 2025

Oracle-guided Dynamic User Preference Modeling for Sequential Recommendation

Dec 01, 2024

Learning with Less: Knowledge Distillation from Large Language Models via Unlabeled Data

Nov 12, 2024