Yi Chang

Jilin University

HiD-VAE: Interpretable Generative Recommendation via Hierarchical and Disentangled Semantic IDs

Aug 06, 2025

Can Large Multimodal Models Actively Recognize Faulty Inputs? A Systematic Evaluation Framework of Their Input Scrutiny Ability

Aug 06, 2025

ConfProBench: A Confidence Evaluation Benchmark for MLLM-Based Process Judges

Aug 06, 2025

Rethinking Discrete Tokens: Treating Them as Conditions for Continuous Autoregressive Image Synthesis

Jul 02, 2025

Training-free LLM Merging for Multi-task Learning

Jun 14, 2025

A Survey of Retentive Network

Jun 07, 2025

Don't Take the Premise for Granted: Evaluating the Premise Critique Ability of Large Language Models

May 29, 2025

THINK-Bench: Evaluating Thinking Efficiency and Chain-of-Thought Quality of Large Reasoning Models

May 28, 2025

Decision Flow Policy Optimization

May 26, 2025

ScreenExplorer: Training a Vision-Language Model for Diverse Exploration in Open GUI World

May 25, 2025