
Zhengxuan Wu

ReCOGS: How Incidental Details of a Logical Form Overshadow an Evaluation of Semantic Interpretation

Mar 24, 2023
Zhengxuan Wu, Christopher D. Manning, Christopher Potts

Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations

Mar 05, 2023
Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas Icard, Noah D. Goodman

Inducing Character-level Structure in Subword-based Language Models with Type-level Interchange Intervention Training

Dec 19, 2022
Jing Huang, Zhengxuan Wu, Kyle Mahowald, Christopher Potts

Causal Proxy Models for Concept-Based Model Explanations

Sep 28, 2022
Zhengxuan Wu, Karel D'Oosterlinck, Atticus Geiger, Amir Zur, Christopher Potts

ZeroC: A Neuro-Symbolic Model for Zero-shot Concept Recognition and Acquisition at Inference Time

Jul 03, 2022
Tailin Wu, Megan Tjandrasuwita, Zhengxuan Wu, Xuelin Yang, Kevin Liu, Rok Sosič, Jure Leskovec

CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior

May 27, 2022
Eldar David Abraham, Karel D'Oosterlinck, Amir Feder, Yair Ori Gat, Atticus Geiger, Christopher Potts, Roi Reichart, Zhengxuan Wu

Oolong: Investigating What Makes Crosslingual Transfer Hard with Controlled Studies

Feb 24, 2022
Zhengxuan Wu, Isabel Papadimitriou, Alex Tamkin

Causal Distillation for Language Models

Dec 05, 2021
Zhengxuan Wu, Atticus Geiger, Josh Rozner, Elisa Kreiss, Hanson Lu, Thomas Icard, Christopher Potts, Noah D. Goodman

Inducing Causal Structure for Interpretable Neural Networks

Dec 01, 2021
Atticus Geiger, Zhengxuan Wu, Hanson Lu, Josh Rozner, Elisa Kreiss, Thomas Icard, Noah D. Goodman, Christopher Potts
