Amy X. Zhang

Penalizing Transparency? How AI Disclosure and Author Demographics Shape Human and AI Judgments About Writing
Jul 02, 2025

Levels of Autonomy for AI Agents
Jun 14, 2025

Cocoa: Co-Planning and Co-Execution with AI Agents
Dec 14, 2024

SPICA: Retrieving Scenarios for Pluralistic In-Context Alignment
Nov 16, 2024

Chain of Alignment: Integrating Public Will with Expert Intelligence for Language Model Alignment
Nov 15, 2024

LLMs as Research Tools: A Large Scale Survey of Researchers' Usage and Perceptions
Oct 30, 2024

Language Models as Critical Thinking Tools: A Case Study of Philosophers
Apr 06, 2024

Correcting misinformation on social media with a large language model
Mar 17, 2024

I Am Not a Lawyer, But: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice
Feb 02, 2024

Case Repositories: Towards Case-Based Reasoning for AI Alignment
Nov 26, 2023