Kellie Webster

Attributed Question Answering: Evaluation and Modeling for Attributed Large Language Models

Dec 15, 2022
Bernd Bohnet, Vinh Q. Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, Tom Kwiatkowski, Ji Ma, Jianmo Ni, Tal Schuster, William W. Cohen, Michael Collins, Dipanjan Das, Donald Metzler, Slav Petrov, Kellie Webster

Query Refinement Prompts for Closed-Book Long-Form Question Answering

Oct 31, 2022
Reinald Kim Amplayo, Kellie Webster, Michael Collins, Dipanjan Das, Shashi Narayan

Flexible text generation for counterfactual fairness probing

Jun 28, 2022
Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster

GLaM: Efficient Scaling of Language Models with Mixture-of-Experts

Dec 13, 2021
Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathy Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V. Le, Yonghui Wu, Zhifeng Chen, Claire Cui

Towards Deconfounding the Influence of Subject's Demographic Characteristics in Question Answering

Apr 15, 2021
Maharshi Gor, Kellie Webster, Jordan Boyd-Graber

How to Write a Bias Statement: Recommendations for Submissions to the Workshop on Gender Bias in NLP

Apr 07, 2021
Christian Hardmeier, Marta R. Costa-jussà, Kellie Webster, Will Radford, Su Lin Blodgett

They, Them, Theirs: Rewriting with Gender-Neutral English

Feb 12, 2021
Tony Sun, Kellie Webster, Apu Shah, William Yang Wang, Melvin Johnson

Underspecification Presents Challenges for Credibility in Modern Machine Learning

Nov 06, 2020
Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yian Ma, Cory McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, D. Sculley

Measuring and Reducing Gendered Correlations in Pre-trained Models

Oct 12, 2020
Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Slav Petrov

Type B Reflexivization as an Unambiguous Testbed for Multilingual Multi-Task Gender Bias

Sep 28, 2020
Ana Valeria Gonzalez, Maria Barrett, Rasmus Hvingelby, Kellie Webster, Anders Søgaard
