Daniel E. Ho

FLawN-T5: An Empirical Examination of Effective Instruction-Tuning Data Mixtures for Legal Reasoning

Apr 02, 2024
Joel Niklaus, Lucia Zheng, Arya D. McCarthy, Christopher Hahn, Brian M. Rosen, Peter Henderson, Daniel E. Ho, Garrett Honke, Percy Liang, Christopher Manning

On the Societal Impact of Open Foundation Models

Feb 27, 2024
Sayash Kapoor, Rishi Bommasani, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Peter Cihon, Aspen Hopkins, Kevin Bankston, Stella Biderman, Miranda Bogen, Rumman Chowdhury, Alex Engler, Peter Henderson, Yacine Jernite, Seth Lazar, Stefano Maffulli, Alondra Nelson, Joelle Pineau, Aviya Skowron, Dawn Song, Victor Storchan, Daniel Zhang, Daniel E. Ho, Percy Liang, Arvind Narayanan

How well do LLMs cite relevant medical references? An evaluation framework and analyses

Feb 03, 2024
Kevin Wu, Eric Wu, Ally Cassasola, Angela Zhang, Kevin Wei, Teresa Nguyen, Sith Riantawan, Patricia Shi Riantawan, Daniel E. Ho, James Zou

Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models

Jan 02, 2024
Matthew Dahl, Varun Magesh, Mirac Suzgun, Daniel E. Ho

Estimating and Implementing Conventional Fairness Metrics With Probabilistic Protected Features

Oct 02, 2023
Hadi Elzayn, Emily Black, Patrick Vossler, Nathanael Jo, Jacob Goldin, Daniel E. Ho

Toward Operationalizing Pipeline-aware ML Fairness: A Research Agenda for Developing Practical Guidelines and Tools

Sep 29, 2023
Emily Black, Rakshit Naidu, Rayid Ghani, Kit T. Rodolfa, Daniel E. Ho, Hoda Heidari

LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models

Aug 20, 2023
Neel Guha, Julian Nyarko, Daniel E. Ho, Christopher Ré, Adam Chilton, Aditya Narayana, Alex Chohlas-Wood, Austin Peters, Brandon Waldon, Daniel N. Rockmore, Diego Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani, Frank Fagan, Galit Sarfaty, Gregory M. Dickinson, Haggai Porat, Jason Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John Nay, Jonathan H. Choi, Kevin Tobia, Margaret Hagan, Megan Ma, Michael Livermore, Nikon Rasumov-Rahe, Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel, Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, Zehua Li

SCALE: Scaling up the Complexity for Advanced Language Model Evaluation

Jun 15, 2023
Vishvaksenan Rasiah, Ronja Stern, Veton Matoshi, Matthias Stürmer, Ilias Chalkidis, Daniel E. Ho, Joel Niklaus

MultiLegalPile: A 689GB Multilingual Legal Corpus

Jun 06, 2023
Joel Niklaus, Veton Matoshi, Matthias Stürmer, Ilias Chalkidis, Daniel E. Ho

LegalBench: Prototyping a Collaborative Benchmark for Legal Reasoning

Sep 13, 2022
Neel Guha, Daniel E. Ho, Julian Nyarko, Christopher Ré
