
Andreas Veit

Efficient Document Ranking with Learnable Late Interactions

Jun 25, 2024

SPEGTI: Structured Prediction for Efficient Generative Text-to-Image Models

Aug 14, 2023

Large Language Models with Controllable Working Memory

Nov 09, 2022

When does mixup promote local linearity in learned representations?

Oct 28, 2022

Teacher Guided Training: An Efficient Framework for Knowledge Transfer

Aug 14, 2022

Leveraging redundancy in attention with Reuse Transformers

Oct 13, 2021

Eigen Analysis of Self-Attention and its Reconstruction from Partial Computation

Jun 16, 2021

Understanding Robustness of Transformers for Image Classification

Mar 26, 2021

On the Reproducibility of Neural Network Predictions

Feb 05, 2021

Improving Calibration in Deep Metric Learning With Cross-Example Softmax

Nov 17, 2020