Katja Filippova

Google Research

Theoretical and Practical Perspectives on what Influence Functions Do

May 26, 2023
Andrea Schioppa, Katja Filippova, Ivan Titov, Polina Zablotskaia

Dissecting Recall of Factual Associations in Auto-Regressive Language Models

Apr 28, 2023
Mor Geva, Jasmijn Bastings, Katja Filippova, Amir Globerson

Make Every Example Count: On Stability and Utility of Self-Influence for Learning from Noisy NLP Datasets

Feb 27, 2023
Irina Bejan, Artem Sokolov, Katja Filippova

Understanding Text Classification Data and Models Using Aggregated Input Salience

Nov 11, 2022
Sebastian Ebert, Alice Shoshana Jakobovits, Katja Filippova

Diagnosing AI Explanation Methods with Folk Concepts of Behavior

Jan 27, 2022
Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova

"Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification

Nov 14, 2021
Jasmijn Bastings, Sebastian Ebert, Polina Zablotskaia, Anders Sandholm, Katja Filippova

Controlled Hallucinations: Learning to Generate Faithfully from Noisy Data

Oct 12, 2020
Katja Filippova

The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?

Oct 12, 2020
Jasmijn Bastings, Katja Filippova

We Need to Talk About Random Splits

May 01, 2020
Anders Søgaard, Sebastian Ebert, Joost Bastings, Katja Filippova
