
Zhengyi Yang

OrchANN: A Unified I/O Orchestration Framework for Skewed Out-of-Core Vector Search

Dec 28, 2025

Do They Understand Them? An Updated Evaluation on Nonbinary Pronoun Handling in Large Language Models

Aug 01, 2025

CLGNN: A Contrastive Learning-based GNN Model for Betweenness Centrality Prediction on Temporal Graphs

Jun 17, 2025

Addressing Missing Data Issue for Diffusion-based Recommendation

May 18, 2025

Graphy'our Data: Towards End-to-End Modeling, Exploring and Generating Report from Raw Data

Feb 24, 2025

$α$-DPO: Adaptive Reward Margin is What Direct Preference Optimization Needs

Oct 14, 2024

$β$-DPO: Direct Preference Optimization with Dynamic $β$

Jul 11, 2024

Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization

Jul 10, 2024

On Softmax Direct Preference Optimization for Recommendation

Jun 14, 2024

Item-side Fairness of Large Language Model-based Recommendation System

Feb 23, 2024