Qiaozhi He

GRAM: A Generative Foundation Reward Model for Reward Generalization

Jun 18, 2025

StickMotion: Generating 3D Human Motions by Drawing a Stickman

Mar 05, 2025

Boosting Text-To-Image Generation via Multilingual Prompting in Large Multimodal Models

Jan 13, 2025

LRHP: Learning Representations for Human Preferences via Preference Pairs

Oct 06, 2024

RoVRM: A Robust Visual Reward Model Optimized via Auxiliary Textual Preference Data

Aug 22, 2024

Cross-layer Attention Sharing for Large Language Models

Aug 04, 2024

ChuXin: 1.6B Technical Report

May 08, 2024

Efficient LLM Inference with Kcache

Apr 28, 2024

Code Comparison Tuning for Code Large Language Models

Mar 28, 2024

RecycleGPT: An Autoregressive Language Model with Recyclable Module

Aug 08, 2023