
Theodore R. Sumers


Learning with Language-Guided State Abstractions

Mar 06, 2024
Andi Peng, Ilia Sucholutsky, Belinda Z. Li, Theodore R. Sumers, Thomas L. Griffiths, Jacob Andreas, Julie A. Shah


How do Large Language Models Navigate Conflicts between Honesty and Helpfulness?

Feb 13, 2024
Ryan Liu, Theodore R. Sumers, Ishita Dasgupta, Thomas L. Griffiths


Preference-Conditioned Language-Guided Abstraction

Feb 05, 2024
Andi Peng, Andreea Bobu, Belinda Z. Li, Theodore R. Sumers, Ilia Sucholutsky, Nishanth Kumar, Thomas L. Griffiths, Julie A. Shah


Deep de Finetti: Recovering Topic Distributions from Large Language Models

Dec 21, 2023
Liyi Zhang, R. Thomas McCoy, Theodore R. Sumers, Jian-Qiao Zhu, Thomas L. Griffiths


Words are all you need? Capturing human sensory similarity with textual descriptors

Jun 15, 2022
Raja Marjieh, Pol van Rijn, Ilia Sucholutsky, Theodore R. Sumers, Harin Lee, Thomas L. Griffiths, Nori Jacoby

(4 figures)

Linguistic communication as (inverse) reward design

Apr 11, 2022
Theodore R. Sumers, Robert D. Hawkins, Mark K. Ho, Thomas L. Griffiths, Dylan Hadfield-Menell

(3 figures)

Predicting Human Similarity Judgments Using Large Language Models

Feb 09, 2022
Raja Marjieh, Ilia Sucholutsky, Theodore R. Sumers, Nori Jacoby, Thomas L. Griffiths

(4 figures)

Extending rational models of communication from beliefs to actions

May 25, 2021
Theodore R. Sumers, Robert D. Hawkins, Mark K. Ho, Thomas L. Griffiths

(4 figures)

Show or Tell? Demonstration is More Robust to Changes in Shared Perception than Explanation

Dec 16, 2020
Theodore R. Sumers, Mark K. Ho, Thomas L. Griffiths

(4 figures)

Learning Rewards from Linguistic Feedback

Sep 30, 2020
Theodore R. Sumers, Mark K. Ho, Robert D. Hawkins, Karthik Narasimhan, Thomas L. Griffiths

(4 figures)