Felix Hill

SODA: Bottleneck Diffusion Models for Representation Learning

Nov 29, 2023
Drew A. Hudson, Daniel Zoran, Mateusz Malinowski, Andrew K. Lampinen, Andrew Jaegle, James L. McClelland, Loic Matthey, Felix Hill, Alexander Lerchner

The Transient Nature of Emergent In-Context Learning in Transformers

Nov 15, 2023
Aaditya K. Singh, Stephanie C. Y. Chan, Ted Moskovitz, Erin Grant, Andrew M. Saxe, Felix Hill

Vision-Language Models as Success Detectors

Mar 13, 2023
Yuqing Du, Ksenia Konyushkova, Misha Denil, Akhil Raju, Jessica Landon, Felix Hill, Nando de Freitas, Serkan Cabi

The Edge of Orthogonality: A Simple View of What Makes BYOL Tick

Feb 09, 2023
Pierre H. Richemond, Allison Tam, Yunhao Tang, Florian Strub, Bilal Piot, Felix Hill

Collaborating with language models for embodied reasoning

Feb 01, 2023
Ishita Dasgupta, Christine Kaeser-Chen, Kenneth Marino, Arun Ahuja, Sheila Babayan, Felix Hill, Rob Fergus

SemPPL: Predicting pseudo-labels for better contrastive representations

Jan 12, 2023
Matko Bošnjak, Pierre H. Richemond, Nenad Tomasev, Florian Strub, Jacob C. Walker, Felix Hill, Lars Holger Buesing, Razvan Pascanu, Charles Blundell, Jovana Mitrovic

Transformers generalize differently from information stored in context vs in weights

Oct 11, 2022
Stephanie C. Y. Chan, Ishita Dasgupta, Junkyung Kim, Dharshan Kumaran, Andrew K. Lampinen, Felix Hill

Meaning without reference in large language models

Aug 12, 2022
Steven T. Piantadosi, Felix Hill

Language models show human-like content effects on reasoning

Jul 14, 2022
Ishita Dasgupta, Andrew K. Lampinen, Stephanie C. Y. Chan, Antonia Creswell, Dharshan Kumaran, James L. McClelland, Felix Hill
