Aditya Krishna Menon

Supervision Complexity and its Role in Knowledge Distillation
Jan 28, 2023
Hrayr Harutyunyan, Ankit Singh Rawat, Aditya Krishna Menon, Seungyeon Kim, Sanjiv Kumar

EmbedDistill: A Geometric Knowledge Distillation for Information Retrieval
Jan 27, 2023
Seungyeon Kim, Ankit Singh Rawat, Manzil Zaheer, Sadeep Jayasumana, Veeranjaneyulu Sadhanala, Wittawat Jitkrittum, Aditya Krishna Menon, Rob Fergus, Sanjiv Kumar

When does mixup promote local linearity in learned representations?
Oct 28, 2022
Arslan Chaudhry, Aditya Krishna Menon, Andreas Veit, Sadeep Jayasumana, Srikumar Ramalingam, Sanjiv Kumar

Robust Distillation for Worst-class Performance
Jun 13, 2022
Serena Wang, Harikrishna Narasimhan, Yichen Zhou, Sara Hooker, Michal Lukasik, Aditya Krishna Menon

ELM: Embedding and Logit Margins for Long-Tail Learning
Apr 27, 2022
Wittawat Jitkrittum, Aditya Krishna Menon, Ankit Singh Rawat, Sanjiv Kumar

When in Doubt, Summon the Titans: Efficient Inference with Large Models
Oct 19, 2021
Ankit Singh Rawat, Manzil Zaheer, Aditya Krishna Menon, Amr Ahmed, Sanjiv Kumar

Training Over-parameterized Models with Non-decomposable Objectives
Jul 09, 2021
Harikrishna Narasimhan, Aditya Krishna Menon

Teacher's pet: understanding and mitigating biases in distillation
Jul 08, 2021
Michal Lukasik, Srinadh Bhojanapalli, Aditya Krishna Menon, Sanjiv Kumar

Disentangling Sampling and Labeling Bias for Learning in Large-Output Spaces
May 12, 2021
Ankit Singh Rawat, Aditya Krishna Menon, Wittawat Jitkrittum, Sadeep Jayasumana, Felix X. Yu, Sashank Reddi, Sanjiv Kumar