Xiting Wang

Toward Personalized LLM-Powered Agents: Foundations, Evaluation, and Future Directions

Feb 26, 2026

Efficient and Stable Reinforcement Learning for Diffusion Language Models

Feb 09, 2026

Unlocking Implicit Experience: Synthesizing Tool-Use Trajectories from Text

Jan 15, 2026

DPWriter: Reinforcement Learning with Diverse Planning Branching for Creative Writing

Jan 14, 2026

Select, Read, and Write: A Multi-Agent Framework of Full-Text-based Related Work Generation

May 26, 2025

Evaluating Text Creativity across Diverse Domains: A Dataset and Large Language Model Evaluator

May 25, 2025

Optimal Transport-Based Token Weighting scheme for Enhanced Preference Optimization

May 24, 2025

REWARD CONSISTENCY: Improving Multi-Objective Alignment from a Data-Centric Perspective

Apr 15, 2025

Entropy-based Exploration Conduction for Multi-step Reasoning

Mar 20, 2025

Controlling Large Language Models Through Concept Activation Vectors

Jan 10, 2025