Chung-Ching Lin

ViCrit: A Verifiable Reinforcement Learning Proxy Task for Visual Perception in VLMs

Jun 11, 2025

Audio-Aware Large Language Models as Judges for Speaking Styles

Jun 06, 2025

Point-RFT: Improving Multimodal Reasoning with Visually Grounded Reinforcement Finetuning

May 26, 2025

SoTA with Less: MCTS-Guided Sample Selection for Data-Efficient Visual Reasoning Self-Improvement

Apr 10, 2025

Measurement of LLM's Philosophies of Human Nature

Apr 03, 2025

Zero-Shot Audio-Visual Editing via Cross-Modal Delta Denoising

Mar 26, 2025

GenXD: Generating Any 3D and 4D Scenes

Nov 05, 2024

SlowFast-VGen: Slow-Fast Learning for Action-Driven Long Video Generation

Oct 30, 2024

MM-Vet v2: A Challenging Benchmark to Evaluate Large Multimodal Models for Integrated Capabilities

Aug 01, 2024

IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation

Jul 15, 2024