
Junge Zhang

VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction

May 26, 2025

Generative AI for Autonomous Driving: Frontiers and Opportunities

May 13, 2025

Uncertainty-Aware Diffusion Guided Refinement of 3D Scenes

Mar 19, 2025

EPO: Explicit Policy Optimization for Strategic Reasoning in LLMs via Reinforcement Learning

Feb 18, 2025

IDEA-Bench: How Far are Generative Models from Professional Designing?

Dec 16, 2024

Rethinking Generalizability and Discriminability of Self-Supervised Learning from Evolutionary Game Theory Perspective

Nov 30, 2024

Uncertainty-aware Reward Model: Teaching Reward Models to Know What is Unknown

Oct 01, 2024

Recent Advances in Attack and Defense Approaches of Large Language Models

Sep 05, 2024

Position: Foundation Agents as the Paradigm Shift for Decision Making

May 29, 2024

SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling

May 21, 2024