
Yongfei Liu

Incremental Human-Object Interaction Detection with Invariant Relation Representation Learning

Oct 30, 2025

Do They Understand Them? An Updated Evaluation on Nonbinary Pronoun Handling in Large Language Models

Aug 01, 2025

DSTC: Direct Preference Learning with Only Self-Generated Tests and Code to Improve Code LMs

Nov 20, 2024

Reward-Augmented Data Enhances Direct Preference Alignment of LLMs

Oct 10, 2024

Just say what you want: only-prompting self-rewarding online preference optimization

Sep 26, 2024

Visual Anchors Are Strong Information Aggregators For Multimodal Large Language Model

May 28, 2024

ViTAR: Vision Transformer with Any Resolution

Mar 28, 2024

InfiMM-HD: A Leap Forward in High-Resolution Multimodal Understanding

Mar 03, 2024

Exploring the Reasoning Abilities of Multimodal Large Language Models (MLLMs): A Comprehensive Survey on Emerging Trends in Multimodal Reasoning

Jan 18, 2024

InfiMM-Eval: Complex Open-Ended Reasoning Evaluation For Multi-Modal Large Language Models

Dec 04, 2023