Liqun Liu

DSH-Bench: A Difficulty- and Scenario-Aware Benchmark with Hierarchical Subject Taxonomy for Subject-Driven Text-to-Image Generation

Mar 09, 2026

Search-P1: Path-Centric Reward Shaping for Stable and Efficient Agentic RAG Training

Feb 26, 2026

Towards Faithful Industrial RAG: A Reinforced Co-adaptation Framework for Advertising QA

Feb 26, 2026

AD-Bench: A Real-World, Trajectory-Aware Advertising Analytics Benchmark for LLM Agents

Feb 15, 2026

Multi-Agent VLMs Guided Self-Training with PNU Loss for Low-Resource Offensive Content Detection

Nov 14, 2025

Strengthened Symbol Binding Makes Large Language Models Reliable Multiple-Choice Selectors

Jun 03, 2024

Enhancing Reinforcement Learning with Label-Sensitive Reward for Natural Language Understanding

May 30, 2024

TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities

Dec 13, 2022

Mixture of Virtual-Kernel Experts for Multi-Objective User Profile Modeling

Jun 04, 2021

Keyphrase Extraction with Span-based Feature Representations

Feb 13, 2020