Scott Lundberg

Sparks of Artificial General Intelligence: Early experiments with GPT-4

Mar 27, 2023
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang

ART: Automatic multi-step reasoning and tool-use for large language models

Mar 16, 2023
Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, Marco Tulio Ribeiro

Adaptive Testing of Computer Vision Models

Dec 06, 2022
Irena Gao, Gabriel Ilharco, Scott Lundberg, Marco Tulio Ribeiro

Fixing Model Bugs with Natural Language Patches

Nov 20, 2022
Shikhar Murty, Christopher D. Manning, Scott Lundberg, Marco Tulio Ribeiro

Model-Agnostic Explainability for Visual Search

Feb 28, 2021
Mark Hamilton, Scott Lundberg, Lei Zhang, Stephanie Fu, William T. Freeman

Explaining by Removing: A Unified Framework for Model Explanation

Nov 21, 2020
Ian Covert, Scott Lundberg, Su-In Lee

Shapley Flow: A Graph-based Approach to Interpreting Model Predictions

Nov 13, 2020
Jiaxuan Wang, Jenna Wiens, Scott Lundberg

Feature Removal Is a Unifying Principle for Model Explanation Methods

Nov 06, 2020
Ian Covert, Scott Lundberg, Su-In Lee
