Michael Y. Hu

[Call for Papers] The 2nd BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus

Apr 09, 2024
Leshem Choshen, Ryan Cotterell, Michael Y. Hu, Tal Linzen, Aaron Mueller, Candace Ross, Alex Warstadt, Ethan Wilcox, Adina Williams, Chengxu Zhuang

Comparing Abstraction in Humans and Large Language Models Using Multimodal Serial Reproduction

Feb 06, 2024
Sreejan Kumar, Raja Marjieh, Byron Zhang, Declan Campbell, Michael Y. Hu, Umang Bhatt, Brenden Lake, Thomas L. Griffiths

Latent State Models of Training Dynamics

Aug 18, 2023
Michael Y. Hu, Angelica Chen, Naomi Saphra, Kyunghyun Cho

Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines

May 23, 2022
Sreejan Kumar, Carlos G. Correa, Ishita Dasgupta, Raja Marjieh, Michael Y. Hu, Robert D. Hawkins, Nathaniel D. Daw, Jonathan D. Cohen, Karthik Narasimhan, Thomas L. Griffiths
