Minyoung Huh

Training Neural Networks from Scratch with Parallel Low-Rank Adapters

Feb 26, 2024
Minyoung Huh, Brian Cheung, Jeremy Bernstein, Phillip Isola, Pulkit Agrawal

Straightening Out the Straight-Through Estimator: Overcoming Optimization Challenges in Vector Quantized Networks

May 15, 2023
Minyoung Huh, Brian Cheung, Pulkit Agrawal, Phillip Isola

Totems: Physical Objects for Verifying Visual Integrity

Sep 26, 2022
Jingwei Ma, Lucy Chai, Minyoung Huh, Tongzhou Wang, Ser-Nam Lim, Phillip Isola, Antonio Torralba

Learning to Ground Multi-Agent Communication with Autoencoders

Oct 28, 2021
Toru Lin, Minyoung Huh, Chris Stauffer, Ser-Nam Lim, Phillip Isola

The Low-Rank Simplicity Bias in Deep Networks

Mar 18, 2021
Minyoung Huh, Hossein Mobahi, Richard Zhang, Brian Cheung, Pulkit Agrawal, Phillip Isola

Transforming and Projecting Images into Class-conditional Generative Networks

May 04, 2020
Minyoung Huh, Richard Zhang, Jun-Yan Zhu, Sylvain Paris, Aaron Hertzmann

Fighting Fake News: Image Splice Detection via Learned Self-Consistency

Sep 05, 2018
Minyoung Huh, Andrew Liu, Andrew Owens, Alexei A. Efros

What makes ImageNet good for transfer learning?

Dec 10, 2016
Minyoung Huh, Pulkit Agrawal, Alexei A. Efros
