Amir Feder

LLMs Accelerate Annotation for Medical Information Extraction

Dec 04, 2023
Akshay Goel, Almog Gueta, Omry Gilon, Chang Liu, Sofia Erell, Lan Huong Nguyen, Xiaohong Hao, Bolous Jaber, Shashir Reddy, Rupesh Kartha, Jean Steiner, Itay Laish, Amir Feder


Causal-structure Driven Augmentations for Text OOD Generalization

Oct 19, 2023
Amir Feder, Yoav Wald, Claudia Shi, Suchi Saria, David Blei

The Temporal Structure of Language Processing in the Human Brain Corresponds to The Layered Hierarchy of Deep Language Models

Oct 11, 2023
Ariel Goldstein, Eric Ham, Mariano Schain, Samuel Nastase, Zaid Zada, Avigail Dabush, Bobbi Aubrey, Harshvardhan Gazula, Amir Feder, Werner K Doyle, Sasha Devore, Patricia Dugan, Daniel Friedman, Roi Reichart, Michael Brenner, Avinatan Hassidim, Orrin Devinsky, Adeen Flinker, Omer Levy, Uri Hasson

Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals

Oct 01, 2023
Yair Gat, Nitay Calderon, Amir Feder, Alexander Chapanin, Amit Sharma, Roi Reichart


Evaluating the Moral Beliefs Encoded in LLMs

Jul 26, 2023
Nino Scherrer, Claudia Shi, Amir Feder, David M. Blei


An Invariant Learning Characterization of Controlled Text Generation

May 31, 2023
Carolina Zheng, Claudia Shi, Keyon Vafa, Amir Feder, David M. Blei


Useful Confidence Measures: Beyond the Max Score

Oct 25, 2022
Gal Yona, Amir Feder, Itay Laish


Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions

Jul 28, 2022
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Amir Feder, Abhilasha Ravichander, Marius Mosbach, Yonatan Belinkov, Hinrich Schütze, Yoav Goldberg


In the Eye of the Beholder: Robust Prediction with Causal User Modeling

Jun 01, 2022
Amir Feder, Guy Horowitz, Yoav Wald, Roi Reichart, Nir Rosenfeld


CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior

May 27, 2022
Eldar David Abraham, Karel D'Oosterlinck, Amir Feder, Yair Ori Gat, Atticus Geiger, Christopher Potts, Roi Reichart, Zhengxuan Wu
