
Hannah Brown

Can AI Be as Creative as Humans?

Jan 12, 2024
Haonan Wang, James Zou, Michael Mozer, Anirudh Goyal, Alex Lamb, Linjun Zhang, Weijie J Su, Zhun Deng, Michael Qizhe Xie, Hannah Brown, Kenji Kawaguchi

Prompt Optimization via Adversarial In-Context Learning

Dec 05, 2023
Xuan Long Do, Yiran Zhao, Hannah Brown, Yuxi Xie, James Xu Zhao, Nancy F. Chen, Kenji Kawaguchi, Michael Qizhe Xie, Junxian He

AttributionLab: Faithfulness of Feature Attribution Under Controllable Environments

Oct 10, 2023
Yang Zhang, Yawei Li, Hannah Brown, Mina Rezaei, Bernd Bischl, Philip Torr, Ashkan Khakzar, Kenji Kawaguchi

Towards Regulatable AI Systems: Technical Gaps and Policy Opportunities

Jun 22, 2023
Xudong Shen, Hannah Brown, Jiashu Tao, Martin Strobel, Yao Tong, Akshay Narayan, Harold Soh, Finale Doshi-Velez

What Does it Mean for a Language Model to Preserve Privacy?

Feb 14, 2022
Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr
