
Eugene Vinitsky

Human-compatible driving partners through data-regularized self-play reinforcement learning

Mar 28, 2024

Reinforcement Learning Based Oscillation Dampening: Scaling up Single-Agent RL algorithms to a 100 AV highway field operational test

Feb 26, 2024

Traffic Smoothing Controllers for Autonomous Vehicles Using Deep Reinforcement Learning and Real-World Trajectory Data

Jan 18, 2024

Stabilizing Unsupervised Environment Design with a Learned Adversary

Aug 22, 2023

Unified Automatic Control of Vehicular Systems with Reinforcement Learning

Jul 30, 2022

Nocturne: a scalable driving benchmark for bringing multi-agent learning one step closer to the real world

Jun 20, 2022

The Surprising Effectiveness of MAPPO in Cooperative, Multi-Agent Games

Mar 02, 2021

Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design

Dec 03, 2020

Optimizing Mixed Autonomy Traffic Flow With Decentralized Autonomous Vehicles and Multi-Agent RL

Oct 30, 2020

Robust Reinforcement Learning using Adversarial Populations

Aug 04, 2020