Alexander Trott

Modeling Bounded Rationality in Multi-Agent Simulations Using Rationally Inattentive Reinforcement Learning

Jan 18, 2022
Tong Mu, Stephan Zheng, Alexander Trott

Finding General Equilibria in Many-Agent Economic Simulations Using Deep Reinforcement Learning

Jan 03, 2022
Michael Curry, Alexander Trott, Soham Phade, Yu Bai, Stephan Zheng

Building a Foundation for Data-Driven, Interpretable, and Robust Policy Design using the AI Economist

Aug 06, 2021
Alexander Trott, Sunil Srinivasa, Douwe van der Wal, Sebastien Haneuse, Stephan Zheng

The AI Economist: Optimal Economic Policy Design via Two-level Deep Reinforcement Learning

Aug 05, 2021
Stephan Zheng, Alexander Trott, Sunil Srinivasa, David C. Parkes, Richard Socher

The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies

Apr 28, 2020
Stephan Zheng, Alexander Trott, Sunil Srinivasa, Nikhil Naik, Melvin Gruesbeck, David C. Parkes, Richard Socher

Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills

Feb 14, 2020
Víctor Campos, Alexander Trott, Caiming Xiong, Richard Socher, Xavier Giro-i-Nieto, Jordi Torres

Keeping Your Distance: Solving Sparse Reward Tasks Using Self-Balancing Shaped Rewards

Nov 04, 2019
Alexander Trott, Stephan Zheng, Caiming Xiong, Richard Socher

Competitive Experience Replay

Feb 17, 2019
Hao Liu, Alexander Trott, Richard Socher, Caiming Xiong
