Nicholas Asher

IRIT-MELODI, CNRS

Modality-Agnostic fMRI Decoding of Vision and Language

Mar 18, 2024
Mitja Nikolaus, Milad Mozafari, Nicholas Asher, Leila Reddy, Rufin VanRullen

Strong hallucinations from negation and how to fix them

Feb 16, 2024
Nicholas Asher, Swarnadeep Bhar

TaCo: Targeted Concept Removal in Output Embeddings for NLP via Information Theory and Explainability

Dec 11, 2023
Fanny Jourdan, Louis Béthune, Agustin Picard, Laurent Risser, Nicholas Asher

Limits for Learning with Language Models

Jun 21, 2023
Nicholas Asher, Swarnadeep Bhar, Akshay Chaturvedi, Julie Hunter, Soumya Paul

Are fairness metric scores enough to assess discrimination biases in machine learning?

Jun 08, 2023
Fanny Jourdan, Laurent Risser, Jean-Michel Loubes, Nicholas Asher

COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks

May 14, 2023
Fanny Jourdan, Agustin Picard, Thomas Fel, Laurent Risser, Jean-Michel Loubes, Nicholas Asher

How optimal transport can tackle gender biases in multi-class neural-network classifiers for job recommendations?

Feb 27, 2023
Fanny Jourdan, Titon Tshiongo Kaninku, Nicholas Asher, Jean-Michel Loubes, Laurent Risser

Analyzing Semantic Faithfulness of Language Models via Input Intervention on Conversational Question Answering

Dec 21, 2022
Akshay Chaturvedi, Swarnadeep Bhar, Soumadeep Saha, Utpal Garain, Nicholas Asher

Interpretive Blindness

Oct 19, 2021
Nicholas Asher, Julie Hunter

Transport-based Counterfactual Models

Aug 30, 2021
Lucas de Lara, Alberto González-Sanz, Nicholas Asher, Jean-Michel Loubes
