
Yong Cui

Sample Efficient Experience Replay in Non-stationary Environments
Sep 18, 2025

LEED: A Highly Efficient and Scalable LLM-Empowered Expert Demonstrations Framework for Multi-Agent Reinforcement Learning
Sep 18, 2025

KeepKV: Eliminating Output Perturbation in KV Cache Compression for Efficient LLMs Inference
Apr 14, 2025

Robust Deep Reinforcement Learning in Robotics via Adaptive Gradient-Masked Adversarial Attacks
Mar 26, 2025

State-Aware Perturbation Optimization for Robust Deep Reinforcement Learning
Mar 26, 2025

LLM-Sketch: Enhancing Network Sketches with LLM
Feb 11, 2025

Leveraging LLM Agents for Translating Network Configurations
Jan 15, 2025

Rethinking Adversarial Attacks in Reinforcement Learning from Policy Distribution Perspective
Jan 08, 2025

Fast Inference for Augmented Large Language Models
Oct 25, 2024

Efficient Inference for Augmented Large Language Models
Oct 23, 2024