Yanda Chen

Department of Computer Science, Columbia University

Social Orientation: A New Feature for Dialogue Analysis

Feb 26, 2024
Todd Morrill, Zhaoyuan Deng, Yanda Chen, Amith Ananthram, Colin Wayne Leach, Kathleen McKeown

Parallel Structures in Pre-training Data Yield In-Context Learning

Feb 19, 2024
Yanda Chen, Chen Zhao, Zhou Yu, Kathleen McKeown, He He

Towards Consistent Natural-Language Explanations via Explanation-Consistency Finetuning

Jan 25, 2024
Yanda Chen, Chandan Singh, Xiaodong Liu, Simiao Zuo, Bin Yu, He He, Jianfeng Gao

Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations

Jul 17, 2023
Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, Kathleen McKeown

In-context Learning Distillation: Transferring Few-shot Learning Ability of Pre-trained Language Models

Dec 20, 2022
Yukun Huang, Yanda Chen, Zhou Yu, Kathleen McKeown

On the Relation between Sensitivity and Accuracy in In-context Learning

Sep 16, 2022
Yanda Chen, Chen Zhao, Zhou Yu, Kathleen McKeown, He He

Meta-learning via Language Model In-context Tuning

Oct 15, 2021
Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, He He

Cross-language Sentence Selection via Data Augmentation and Rationale Training

Jun 04, 2021
Yanda Chen, Chris Kedzie, Suraj Nair, Petra Galuščáková, Rui Zhang, Douglas W. Oard, Kathleen McKeown

Improved Synthetic Training for Reading Comprehension

Oct 24, 2020
Yanda Chen, Md Arafat Sultan, Vittorio Castelli

Detecting and Reducing Bias in a High Stakes Domain

Aug 29, 2019
Ruiqi Zhong, Yanda Chen, Desmond Patton, Charlotte Selous, Kathy McKeown
