Xiaodan Liang

HiRes-LLaVA: Restoring Fragmentation Input in High-Resolution Large Vision-Language Models

Jul 11, 2024

OV-DINO: Unified Open-Vocabulary Detection with Language-Aware Selective Fusion

Jul 10, 2024

HumanRefiner: Benchmarking Abnormal Human Generation and Refining with Coarse-to-fine Pose-Reversible Guidance

Jul 09, 2024

Affordances-Oriented Planning using Foundation Models for Continuous Vision-Language Navigation

Jul 08, 2024

Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs

Jun 28, 2024

FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving

Jun 20, 2024

Predicting Genetic Mutation from Whole Slide Images via Biomedical-Linguistic Knowledge Enhanced Multi-label Classification

Jun 05, 2024

UA-Track: Uncertainty-Aware End-to-End 3D Multi-Object Tracking

Jun 04, 2024

AutoStudio: Crafting Consistent Subjects in Multi-turn Interactive Image Generation

Jun 03, 2024

Correctable Landmark Discovery via Large Models for Vision-Language Navigation

May 29, 2024