Justin Fu

Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research

Oct 12, 2023
Cole Gulino, Justin Fu, Wenjie Luo, George Tucker, Eli Bronstein, Yiren Lu, Jean Harb, Xinlei Pan, Yan Wang, Xiangyu Chen, John D. Co-Reyes, Rishabh Agarwal, Rebecca Roelofs, Yao Lu, Nico Montali, Paul Mougin, Zoey Yang, Brandyn White, Aleksandra Faust, Rowan McAllister, Dragomir Anguelov, Benjamin Sapp

Imitation Is Not Enough: Robustifying Imitation with Reinforcement Learning for Challenging Driving Scenarios

Dec 21, 2022
Yiren Lu, Justin Fu, George Tucker, Xinlei Pan, Eli Bronstein, Becca Roelofs, Benjamin Sapp, Brandyn White, Aleksandra Faust, Shimon Whiteson, Dragomir Anguelov, Sergey Levine

Hierarchical Model-Based Imitation Learning for Planning in Autonomous Driving

Oct 18, 2022
Eli Bronstein, Mark Palatucci, Dominik Notz, Brandyn White, Alex Kuefler, Yiren Lu, Supratik Paul, Payam Nikdel, Paul Mougin, Hongge Chen, Justin Fu, Austin Abrams, Punit Shah, Evan Racah, Benjamin Frenkel, Shimon Whiteson, Dragomir Anguelov

Context-Aware Language Modeling for Goal-Oriented Dialogue Systems

Apr 22, 2022
Charlie Snell, Mengjiao Yang, Justin Fu, Yi Su, Sergey Levine

CHAI: A CHatbot AI for Task-Oriented Dialogue with Offline Reinforcement Learning

Apr 18, 2022
Siddharth Verma, Justin Fu, Mengjiao Yang, Sergey Levine

Benchmarks for Deep Off-Policy Evaluation

Mar 30, 2021
Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R. Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, Sergey Levine, Tom Le Paine

Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation

Feb 16, 2021
Justin Fu, Sergey Levine

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems

May 04, 2020
Sergey Levine, Aviral Kumar, George Tucker, Justin Fu

D4RL: Datasets for Deep Data-Driven Reinforcement Learning

Apr 20, 2020
Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, Sergey Levine
