Runze Wu

CrowdAgent: Multi-Agent Managed Multi-Source Annotation System
Sep 17, 2025

MVP-Shapley: Feature-based Modeling for Evaluating the Most Valuable Player in Basketball
Jun 05, 2025

Fast-DataShapley: Neural Modeling for Training Data Valuation
Jun 05, 2025

Empowering Economic Simulation for Massively Multiplayer Online Games through Generative Agent-Based Modeling
Jun 05, 2025

Prompt Candidates, then Distill: A Teacher-Student Framework for LLM-driven Data Annotation
Jun 04, 2025

Digital Player: Evaluating Large Language Models based Human-like Agent in Games
Feb 28, 2025

A Flexible Plug-and-Play Module for Generating Variable-Length
Dec 12, 2024

Rank Aggregation in Crowdsourcing for Listwise Annotations
Oct 10, 2024

A Dataset for the Validation of Truth Inference Algorithms Suitable for Online Deployment
Mar 10, 2024

XRL-Bench: A Benchmark for Evaluating and Comparing Explainable Reinforcement Learning Techniques
Feb 20, 2024