Emma Brunskill

Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning

Aug 19, 2021
Andrea Zanette, Martin J. Wainwright, Emma Brunskill

On the Opportunities and Risks of Foundation Models

Aug 18, 2021
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

Design of Experiments for Stochastic Contextual Linear Bandits

Jul 22, 2021
Andrea Zanette, Kefan Dong, Jonathan Lee, Emma Brunskill

Universal Off-Policy Evaluation

Apr 26, 2021
Yash Chandak, Scott Niekum, Bruno Castro da Silva, Erik Learned-Miller, Emma Brunskill, Philip S. Thomas

Online Model Selection for Reinforcement Learning with Function Approximation

Nov 19, 2020
Jonathan N. Lee, Aldo Pacchiano, Vidya Muthukumar, Weihao Kong, Emma Brunskill

Provably Efficient Reward-Agnostic Navigation with Linear Value Iteration

Aug 18, 2020
Andrea Zanette, Alessandro Lazaric, Mykel J. Kochenderfer, Emma Brunskill

Provably Good Batch Reinforcement Learning Without Great Exploration

Jul 22, 2020
Yao Liu, Adith Swaminathan, Alekh Agarwal, Emma Brunskill
