Andrew K. Lampinen

SODA: Bottleneck Diffusion Models for Representation Learning

Nov 29, 2023
Drew A. Hudson, Daniel Zoran, Mateusz Malinowski, Andrew K. Lampinen, Andrew Jaegle, James L. McClelland, Loic Matthey, Felix Hill, Alexander Lerchner

Getting aligned on representational alignment

Nov 02, 2023
Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, Been Kim, Bradley C. Love, Erin Grant, Iris Groen, Jascha Achterberg, Joshua B. Tenenbaum, Katherine M. Collins, Katherine L. Hermann, Kerem Oktar, Klaus Greff, Martin N. Hebart, Nori Jacoby, Qiuyi Zhang, Raja Marjieh, Robert Geirhos, Sherol Chen, Simon Kornblith, Sunayana Rane, Talia Konkle, Thomas P. O'Connell, Thomas Unterthiner, Andrew K. Lampinen, Klaus-Robert Müller, Mariya Toneva, Thomas L. Griffiths

Evaluating Spatial Understanding of Large Language Models

Oct 23, 2023
Yutaro Yamada, Yihan Bao, Andrew K. Lampinen, Jungo Kasai, Ilker Yildirim

Improving neural network representations using human similarity judgments

Jun 07, 2023
Lukas Muttenthaler, Lorenz Linhardt, Jonas Dippel, Robert A. Vandermeulen, Katherine Hermann, Andrew K. Lampinen, Simon Kornblith

Transformers generalize differently from information stored in context vs in weights

Oct 11, 2022
Stephanie C. Y. Chan, Ishita Dasgupta, Junkyung Kim, Dharshan Kumaran, Andrew K. Lampinen, Felix Hill

Language models show human-like content effects on reasoning

Jul 14, 2022
Ishita Dasgupta, Andrew K. Lampinen, Stephanie C. Y. Chan, Antonia Creswell, Dharshan Kumaran, James L. McClelland, Felix Hill

Know your audience: specializing grounded language models with the game of Dixit

Jun 16, 2022
Aaditya K. Singh, David Ding, Andrew Saxe, Felix Hill, Andrew K. Lampinen

Semantic Exploration from Language Abstractions and Pretrained Representations

Apr 08, 2022
Allison C. Tam, Neil C. Rabinowitz, Andrew K. Lampinen, Nicholas A. Roy, Stephanie C. Y. Chan, DJ Strouse, Jane X. Wang, Andrea Banino, Felix Hill

Can language models learn from explanations in context?

Apr 05, 2022
Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill
