Pan Li

Towards A Universal Graph Structural Encoder

Apr 15, 2025

MatterTune: An Integrated, User-Friendly Platform for Fine-Tuning Atomistic Foundation Models to Accelerate Materials Simulation and Discovery

Apr 14, 2025

Automating Personalization: Prompt Optimization for Recommendation Reranking

Apr 04, 2025

Structural Alignment Improves Graph Test-Time Adaptation

Feb 25, 2025

The Blessing of Reasoning: LLM-Based Contrastive Explanations in Black-Box Recommender Systems

Feb 24, 2025

Model Generalization on Text Attribute Graphs: Principles with Large Language Models

Feb 17, 2025

Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding

Jan 01, 2025

Understanding and Mitigating Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing

Dec 31, 2024

Underestimated Privacy Risks for Minority Populations in Large Language Model Unlearning

Dec 11, 2024

Dynamic Self-Distillation via Previous Mini-batches for Fine-tuning Small Language Models

Nov 25, 2024