R. Thomas McCoy

Distilling Symbolic Priors for Concept Learning into Neural Networks

Feb 10, 2024
Ioana Marinescu, R. Thomas McCoy, Thomas L. Griffiths

Deep de Finetti: Recovering Topic Distributions from Large Language Models

Dec 21, 2023
Liyi Zhang, R. Thomas McCoy, Theodore R. Sumers, Jian-Qiao Zhu, Thomas L. Griffiths

Bayes in the age of intelligent machines

Nov 16, 2023
Thomas L. Griffiths, Jian-Qiao Zhu, Erin Grant, R. Thomas McCoy

Embers of Autoregression: Understanding Large Language Models Through the Problem They Are Trained to Solve

Sep 24, 2023
R. Thomas McCoy, Shunyu Yao, Dan Friedman, Matthew Hardy, Thomas L. Griffiths

Modeling rapid language learning by distilling Bayesian priors into artificial neural networks

May 24, 2023
R. Thomas McCoy, Thomas L. Griffiths

How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech

Jan 26, 2023
Aditya Yedetore, Tal Linzen, Robert Frank, R. Thomas McCoy

Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages

Aug 11, 2022
Paul Soulos, Sudha Rao, Caitlin Smith, Eric Rosen, Asli Celikyilmaz, R. Thomas McCoy, Yichen Jiang, Coleman Haley, Roland Fernandez, Hamid Palangi, Jianfeng Gao, Paul Smolensky

Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems

May 02, 2022
Paul Smolensky, R. Thomas McCoy, Roland Fernandez, Matthew Goldrick, Jianfeng Gao

How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN

Nov 18, 2021
R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, Asli Celikyilmaz

Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis

Nov 24, 2020
Michael A. Lepori, R. Thomas McCoy
