Senqiao Yang

VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning

Jul 17, 2025

Mitigating Object Hallucinations via Sentence-Level Early Intervention

Jul 16, 2025

Omni-DPO: A Dual-Perspective Paradigm for Dynamic Preference Learning of LLMs

Jun 11, 2025

Logits-Based Finetuning

May 30, 2025

Does Your Vision-Language Model Get Lost in the Long Video Sampling Dilemma?

Mar 16, 2025

Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition

Dec 12, 2024

VisionZip: Longer is Better but Not Necessary in Vision Language Models

Dec 05, 2024

Typicalness-Aware Learning for Failure Detection

Nov 04, 2024

Impacts of Darwinian Evolution on Pre-trained Deep Neural Networks

Aug 10, 2024

Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs

Jun 26, 2024