
Zhuoran Yang

Active Advantage-Aligned Online Reinforcement Learning with Offline Data

Feb 11, 2025

Learning Task Representations from In-Context Learning

Feb 08, 2025

An Instrumental Value for Data Production and its Application to Data Pricing

Dec 24, 2024

Enhancing Multi-Text Long Video Generation Consistency without Tuning: Time-Frequency Analysis, Prompt Alignment, and Theory

Dec 23, 2024

Physical Informed Driving World Model

Dec 13, 2024

AV-Odyssey Bench: Can Your Multimodal LLMs Really Understand Audio-Visual Information?

Dec 03, 2024

Unveiling the Statistical Foundations of Chain-of-Thought Prompting Methods

Aug 25, 2024

Provable Statistical Rates for Consistency Diffusion Models

Jun 23, 2024

From Words to Actions: Unveiling the Theoretical Underpinnings of LLM-Driven Autonomous Systems

May 30, 2024