
Matthew Lyle Olson

Analyzing Hierarchical Structure in Vision Models with Sparse Autoencoders

May 21, 2025

Steering Large Language Models to Evaluate and Amplify Creativity

Dec 08, 2024

Debias your Large Multi-Modal Model at Test-Time with Non-Contrastive Visual Attribute Steering

Nov 15, 2024

Super-Resolution without High-Resolution Labels for Black Hole Simulations

Nov 03, 2024

Debiasing Large Vision-Language Models by Ablating Protected Attribute Representations

Oct 17, 2024

ClimDetect: A Benchmark Dataset for Climate Change Detection and Attribution

Aug 28, 2024

Why do LLaVA Vision-Language Models Reply to Images in English?

Jul 02, 2024

LVLM-Interpret: An Interpretability Tool for Large Vision-Language Models

Apr 03, 2024