Richard L. Lewis

Combining Behaviors with the Successor Features Keyboard

Oct 24, 2023
Wilka Carvalho, Andre Saraiva, Angelos Filos, Andrew Kyle Lampinen, Loic Matthey, Richard L. Lewis, Honglak Lee, Satinder Singh, Danilo J. Rezende, Daniel Zoran

In-Context Analogical Reasoning with Pre-Trained Language Models

Jun 05, 2023
Xiaoyang Hu, Shane Storks, Richard L. Lewis, Joyce Chai

Composing Task Knowledge with Modular Successor Feature Approximators

Jan 28, 2023
Wilka Carvalho, Angelos Filos, Richard L. Lewis, Honglak Lee, Satinder Singh

In-Context Policy Iteration

Oct 07, 2022
Ethan Brooks, Logan Walls, Richard L. Lewis, Satinder Singh

Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention

Apr 26, 2021
Soo Hyun Ryu, Richard L. Lewis

Reinforcement Learning of Implicit and Explicit Control Flow in Instructions

Feb 25, 2021
Ethan A. Brooks, Janarthanan Rajendran, Richard L. Lewis, Satinder Singh

Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in First-person Simulated 3D Environments

Oct 28, 2020
Wilka Carvalho, Anthony Liang, Kimin Lee, Sungryull Sohn, Honglak Lee, Richard L. Lewis, Satinder Singh

Variance-Based Rewards for Approximate Bayesian Reinforcement Learning

Mar 15, 2012
Jonathan Sorg, Satinder Singh, Richard L. Lewis
