Mark Yatskar

CoMo: Controllable Motion Generation through Language Guided Pose Code Editing

Mar 20, 2024
Yiming Huang, Weilin Wan, Yue Yang, Chris Callison-Burch, Mark Yatskar, Lingjie Liu

Holodeck: Language Guided Generation of 3D Embodied AI Environments

Dec 14, 2023
Yue Yang, Fan-Yun Sun, Luca Weihs, Eli VanderBilt, Alvaro Herrasti, Winson Han, Jiajun Wu, Nick Haber, Ranjay Krishna, Lingjie Liu, Chris Callison-Burch, Mark Yatskar, Aniruddha Kembhavi, Christopher Clark

Pachinko: Patching Interpretable QA Models through Natural Language Feedback

Nov 16, 2023
Chaitanya Malaviya, Subin Lee, Dan Roth, Mark Yatskar

Interpretable-by-Design Text Classification with Iteratively Generated Concept Bottleneck

Oct 30, 2023
Josh Magnus Ludan, Qing Lyu, Yue Yang, Liam Dugan, Mark Yatskar, Chris Callison-Burch

ExpertQA: Expert-Curated Questions and Attributed Answers

Sep 14, 2023
Chaitanya Malaviya, Subin Lee, Sihao Chen, Elizabeth Sieber, Mark Yatskar, Dan Roth

Interpretable by Design Visual Question Answering

May 24, 2023
Xingyu Fu, Ben Zhou, Sihao Chen, Mark Yatskar, Dan Roth

AmbiCoref: Evaluating Human and Model Sensitivity to Ambiguous Coreference

Feb 03, 2023
Yuewei Yuan, Chaitanya Malaviya, Mark Yatskar

Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification

Nov 21, 2022
Yue Yang, Artemis Panagopoulou, Shenghao Zhou, Daniel Jin, Chris Callison-Burch, Mark Yatskar

Cascading Biases: Investigating the Effect of Heuristic Annotation Strategies on Data and Models

Oct 24, 2022
Chaitanya Malaviya, Sudeep Bhatia, Mark Yatskar
