Mahima Pushkarna

Google Research

LLM Comparator: Visual Analytics for Side-by-Side Evaluation of Large Language Models

Feb 16, 2024
Minsuk Kahng, Ian Tenney, Mahima Pushkarna, Michael Xieyang Liu, James Wexler, Emily Reif, Krystal Kallarackal, Minsuk Chang, Michael Terry, Lucas Dixon


ConstitutionMaker: Interactively Critiquing Large Language Models by Converting Feedback into Principles

Oct 24, 2023
Savvas Petridis, Ben Wedin, James Wexler, Aaron Donsbach, Mahima Pushkarna, Nitesh Goyal, Carrie J. Cai, Michael Terry


GEMv2: Multilingual NLG Benchmarking in a Single Line of Code

Jun 24, 2022
Sebastian Gehrmann, Abhik Bhattacharjee, Abinaya Mahendiran, Alex Wang, Alexandros Papangelis, Aman Madaan, Angelina McMillan-Major, Anna Shvets, Ashish Upadhyay, Bingsheng Yao, Bryan Wilie, Chandra Bhagavatula, Chaobin You, Craig Thomson, Cristina Garbacea, Dakuo Wang, Daniel Deutsch, Deyi Xiong, Di Jin, Dimitra Gkatzia, Dragomir Radev, Elizabeth Clark, Esin Durmus, Faisal Ladhak, Filip Ginter, Genta Indra Winata, Hendrik Strobelt, Hiroaki Hayashi, Jekaterina Novikova, Jenna Kanerva, Jenny Chim, Jiawei Zhou, Jordan Clive, Joshua Maynez, João Sedoc, Juraj Juraska, Kaustubh Dhole, Khyathi Raghavi Chandu, Laura Perez-Beltrachini, Leonardo F. R. Ribeiro, Lewis Tunstall, Li Zhang, Mahima Pushkarna, Mathias Creutz, Michael White, Mihir Sanjay Kale, Moussa Kamal Eddine, Nico Daheim, Nishant Subramani, Ondrej Dusek, Paul Pu Liang, Pawan Sasanka Ammanamanchi, Qi Zhu, Ratish Puduppully, Reno Kriz, Rifat Shahriyar, Ronald Cardenas, Saad Mahamood, Salomey Osei, Samuel Cahyawijaya, Sanja Štajner, Sebastien Montella, Shailza Jolly, Simon Mille, Tahmid Hasan, Tianhao Shen, Tosin Adewumi, Vikas Raunak, Vipul Raheja, Vitaly Nikolaev, Vivian Tsai, Yacine Jernite, Ying Xu, Yisi Sang, Yixin Liu, Yufang Hou


Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI

Apr 03, 2022
Mahima Pushkarna, Andrew Zaldivar, Oddur Kjartansson


Healthsheet: Development of a Transparency Artifact for Health Datasets

Feb 26, 2022
Negar Rostamzadeh, Diana Mincu, Subhrajit Roy, Andrew Smart, Lauren Wilcox, Mahima Pushkarna, Jessica Schrouff, Razvan Amironesei, Nyalleng Moorosi, Katherine Heller


The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models

Aug 12, 2020
Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, Ann Yuan


The What-If Tool: Interactive Probing of Machine Learning Models

Jul 09, 2019
James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Viegas, Jimbo Wilson
