Thomas Unterthiner

Aligning Machine and Human Visual Representations across Abstraction Levels
Sep 10, 2024

PaliGemma: A versatile 3B VLM for transfer
Jul 10, 2024

Getting aligned on representational alignment
Nov 02, 2023

Set Learning for Accurate and Calibrated Models
Jul 10, 2023

Accurate Machine Learned Quantum-Mechanical Force Fields for Biomolecular Simulations
May 17, 2022

GradMax: Growing Neural Networks using Gradient Information
Jan 13, 2022

Do Vision Transformers See Like Convolutional Neural Networks?
Aug 19, 2021

MLP-Mixer: An all-MLP Architecture for Vision
May 17, 2021

Differentiable Patch Selection for Image Recognition
Apr 07, 2021

Understanding Robustness of Transformers for Image Classification
Mar 26, 2021