Nan Jiang

Faculty of Information Technology, Beijing University of Technology, Beijing, China; Beijing Key Laboratory of Trusted Computing, Beijing, China; National Engineering Laboratory for Critical Technologies of Information Security Classified Protection, Beijing, China

STMGF: An Effective Spatial-Temporal Multi-Granularity Framework for Traffic Forecasting

Apr 08, 2024

RouterBench: A Benchmark for Multi-LLM Routing System

Mar 28, 2024

Towards Effective Next POI Prediction: Spatial and Semantic Augmentation with Remote Sensing Data

Mar 22, 2024

Scaling Up Dynamic Human-Scene Interaction Modeling

Mar 13, 2024

On the Curses of Future and History in Future-dependent Value Functions for Off-policy Evaluation

Feb 22, 2024

A Theoretical Analysis of Nash Learning from Human Feedback under General KL-Regularized Preference

Feb 11, 2024

Vertical Symbolic Regression via Deep Policy Gradient

Feb 01, 2024

Harnessing Density Ratios for Online Reinforcement Learning

Jan 18, 2024

Vertical Symbolic Regression

Dec 19, 2023

Gibbs Sampling from Human Feedback: A Provable KL-constrained Framework for RLHF

Dec 18, 2023