Rong Jin

Michigan State University

Cross-attention Secretly Performs Orthogonal Alignment in Recommendation Models

Oct 10, 2025

Bridging Past and Future: Distribution-Aware Alignment for Time Series Forecasting

Sep 17, 2025

Utilizing Strategic Pre-training to Reduce Overfitting: Baguan -- A Pre-trained Weather Forecasting Model

May 20, 2025

Enhancing Embedding Representation Stability in Recommendation Systems with Semantic ID

Apr 02, 2025

External Large Foundation Model: How to Efficiently Serve Trillions of Parameters for Online Ads Recommendation

Feb 26, 2025

Maximizing the Impact of Deep Learning on Subseasonal-to-Seasonal Climate Forecasting: The Essential Role of Optimization

Nov 23, 2024

A Collaborative Ensemble Framework for CTR Prediction

Nov 20, 2024

$α$-DPO: Adaptive Reward Margin is What Direct Preference Optimization Needs

Oct 14, 2024

Mitigating Time Discretization Challenges with WeatherODE: A Sandwich Physics-Driven Neural ODE for Weather Forecasting

Oct 09, 2024

MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans?

Aug 23, 2024