
Deyu Zhou

The Pensieve Paradigm: Stateful Language Models Mastering Their Own Context

Feb 12, 2026

Can Vision Replace Text in Working Memory? Evidence from Spatial n-Back in Vision-Language Models

Feb 04, 2026

When KV Cache Reuse Fails in Multi-Agent Systems: Cross-Candidate Interaction is Crucial for LLM Judges

Jan 13, 2026

Training-Free Text-Guided Color Editing with Multi-Modal Diffusion Transformer

Aug 12, 2025

Large Language Models Have Intrinsic Meta-Cognition, but Need a Good Lens

Jun 10, 2025

SynGraph: A Dynamic Graph-LLM Synthesis Framework for Sparse Streaming User Sentiment Modeling

Mar 06, 2025

PROPER: A Progressive Learning Framework for Personalized Large Language Models with Group-Level Adaptation

Mar 03, 2025

Explainable Depression Detection in Clinical Interviews with Personalized Retrieval-Augmented Generation

Mar 03, 2025

Persuasion Should be Double-Blind: A Multi-Domain Dialogue Dataset With Faithfulness Based on Causal Theory of Mind

Feb 28, 2025

Benchmarking Temporal Reasoning and Alignment Across Chinese Dynasties

Feb 24, 2025