Dilip Krishnan
Google Research

MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis

Nov 16, 2022
Tianhong Li, Huiwen Chang, Shlok Kumar Mishra, Han Zhang, Dina Katabi, Dilip Krishnan

A simple, efficient and scalable contrastive masked autoencoder for learning visual representations

Oct 30, 2022
Shlok Mishra, Joshua Robinson, Huiwen Chang, David Jacobs, Aaron Sarna, Aaron Maschinot, Dilip Krishnan

Object-Aware Cropping for Self-Supervised Learning

Dec 01, 2021
Shlok Mishra, Anshul Shah, Ankan Bansal, Abhyuday Jagannatha, Abhishek Sharma, David Jacobs, Dilip Krishnan

Pyramid Adversarial Training Improves ViT Performance

Nov 30, 2021
Charles Herrmann, Kyle Sargent, Lu Jiang, Ramin Zabih, Huiwen Chang, Ce Liu, Dilip Krishnan, Deqing Sun

Contrastive Multiview Coding for Enzyme-Substrate Interaction Prediction

Nov 18, 2021
Apurva Kalia, Dilip Krishnan, Soha Hassoun

Unsupervised Disentanglement without Autoencoding: Pitfalls and Future Directions

Aug 14, 2021
Andrea Burns, Aaron Sarna, Dilip Krishnan, Aaron Maschinot

Understanding invariance via feedforward inversion of discriminatively trained classifiers

Mar 15, 2021
Piotr Teterwak, Chiyuan Zhang, Dilip Krishnan, Michael C. Mozer

What Makes for Good Views for Contrastive Learning?

May 20, 2020
Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola

Supervised Contrastive Learning

Apr 23, 2020
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan

Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?

Mar 25, 2020
Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, Phillip Isola
