Jungo Kasai

Large Language Models as Tax Attorneys: A Case Study in Legal Capabilities Emergence

Jun 12, 2023
John J. Nay, David Karamardian, Sarah B. Lawsky, Wenting Tao, Meghana Bhat, Raghav Jain, Aaron Travis Lee, Jonathan H. Choi, Jungo Kasai

Better understanding of Large Language Models' (LLMs) legal analysis abilities can contribute to improving the efficiency of legal services, governing artificial intelligence, and leveraging LLMs to identify inconsistencies in law. This paper explores LLM capabilities in applying tax law. We choose this area of law because it has a structure that allows us to set up automated validation pipelines across thousands of examples, requires logical reasoning and math skills, and enables us to test LLM capabilities in a manner relevant to the real-world economic lives of citizens and companies. Our experiments demonstrate emerging legal understanding capabilities, with improved performance in each subsequent OpenAI model release. We experiment with retrieving and utilizing the relevant legal authority to assess the impact of providing additional legal context to LLMs. Few-shot prompting, presenting examples of question-answer pairs, is also found to significantly enhance the performance of the most advanced model, GPT-4. The findings indicate that LLMs, particularly when combined with prompting enhancements and the correct legal texts, can perform at high levels of accuracy but not yet at expert tax lawyer levels. As LLMs continue to advance, their ability to reason about law autonomously could have significant implications for the legal profession and AI governance.
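
The prompting enhancements studied here, few-shot examples and retrieved legal authority, both come down to prompt assembly. Below is a minimal sketch of that idea under stated assumptions, not the authors' pipeline; `retrieve_sections` and `complete` are hypothetical stand-ins for a retriever over legal texts and an LLM API call.

```python
# Hedged sketch: few-shot prompting plus retrieved legal context.
# `retrieve_sections` and `complete` are hypothetical placeholders.
def build_prompt(question, examples, retrieve_sections):
    authority = "\n\n".join(retrieve_sections(question))  # e.g., relevant statute text
    shots = "\n\n".join(f"Question: {q}\nAnswer: {a}" for q, a in examples)
    return (
        f"Relevant law:\n{authority}\n\n"
        f"{shots}\n\n"
        f"Question: {question}\nAnswer:"
    )

# answer = complete(build_prompt(question, few_shot_examples, retrieve_sections))
```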

Do All Languages Cost the Same? Tokenization in the Era of Commercial Language Models

May 23, 2023
Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Jungo Kasai, David R. Mortensen, Noah A. Smith, Yulia Tsvetkov

Language models have graduated from being research prototypes to commercialized products offered as web APIs, and recent works have highlighted the multilingual capabilities of these products. The API vendors charge their users based on usage, more specifically on the number of "tokens" processed or generated by the underlying language models. What constitutes a token, however, is training-data- and model-dependent, with a large variance in the number of tokens required to convey the same information in different languages. In this work, we analyze the effect of this non-uniformity on the fairness of an API's pricing policy across languages. We conduct a systematic analysis of the cost and utility of OpenAI's language model API on multilingual benchmarks in 22 typologically diverse languages. We show evidence that speakers of a large number of the supported languages are overcharged while obtaining poorer results. These speakers tend to also come from regions where the APIs are less affordable to begin with. Through these analyses, we aim to increase transparency around language model APIs' pricing policies and encourage the vendors to make them more equitable.
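
The pricing disparity is easy to probe directly: tokenize the same message in two languages and multiply by the per-token price. The sketch below uses the open tiktoken tokenizer with a placeholder price; exact counts vary by tokenizer version, but non-Latin scripts typically require more tokens for the same content.

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
price_per_1k = 0.002  # placeholder USD rate, for illustration only

texts = {
    "English": "Hello, how are you today?",
    "Japanese": "こんにちは、今日はお元気ですか？",
}
for lang, text in texts.items():
    n_tokens = len(enc.encode(text))
    print(f"{lang}: {n_tokens} tokens, ~${n_tokens / 1000 * price_per_1k:.5f} per request")
```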

Evaluating GPT-4 and ChatGPT on Japanese Medical Licensing Examinations

Apr 05, 2023
Jungo Kasai, Yuhei Kasai, Keisuke Sakaguchi, Yutaro Yamada, Dragomir Radev

As large language models (LLMs) gain popularity among speakers of diverse languages, we believe that it is crucial to benchmark them to better understand model behaviors, failures, and limitations in languages beyond English. In this work, we evaluate LLM APIs (ChatGPT, GPT-3, and GPT-4) on the Japanese national medical licensing examinations from the past five years, including the current year. Our team comprises native Japanese-speaking NLP researchers and a practicing cardiologist based in Japan. Our experiments show that GPT-4 outperforms ChatGPT and GPT-3 and passes all six years of the exams, highlighting LLMs' potential in a language that is typologically distant from English. However, our evaluation also exposes critical limitations of the current LLM APIs. First, LLMs sometimes select prohibited choices that should be strictly avoided in medical practice in Japan, such as suggesting euthanasia. Further, our analysis shows that the API costs are generally higher and the maximum context size is smaller for Japanese because of the way non-Latin scripts are currently tokenized in the pipeline. We release our benchmark, IgakuQA, along with all model outputs and exam metadata. We hope that our results and benchmark will spur progress on more diverse applications of LLMs. Our benchmark is available at https://github.com/jungokasai/IgakuQA.
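
As a rough illustration of the evaluation setup, one exam item can be rendered as a multiple-choice prompt and the model's selected option read back out. This is a hedged sketch, not the IgakuQA harness; real exam items may allow multiple correct options and require stricter parsing.

```python
import re

def format_question(stem, options):
    # Options are lettered a, b, c, ... for the prompt.
    lettered = "\n".join(f"{chr(ord('a') + i)}. {opt}" for i, opt in enumerate(options))
    return f"{stem}\n{lettered}\n正解:"  # "correct answer:"

def parse_choice(completion, n_options):
    valid = "abcde"[:n_options]
    m = re.search(rf"[{valid}]", completion.lower())
    return m.group(0) if m else None
```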

* Added results from the March 2023 exam 

TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering

Mar 28, 2023
Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, Ranjay Krishna, Noah A. Smith

Despite thousands of researchers, engineers, and artists actively working on improving text-to-image generation models, systems often fail to produce images that accurately align with the text inputs. We introduce TIFA (Text-to-Image Faithfulness evaluation with question Answering), an automatic evaluation metric that measures the faithfulness of a generated image to its text input via visual question answering (VQA). Specifically, given a text input, we automatically generate several question-answer pairs using a language model. We calculate image faithfulness by checking whether existing VQA models can answer these questions using the generated image. TIFA is a reference-free metric that allows for fine-grained and interpretable evaluations of generated images. TIFA also has better correlations with human judgments than existing metrics. Based on this approach, we introduce TIFA v1.0, a benchmark consisting of 4K diverse text inputs and 25K questions across 12 categories (object, counting, etc.). We present a comprehensive evaluation of existing text-to-image models using TIFA v1.0 and highlight the limitations and challenges of current models. For instance, we find that current text-to-image models, despite doing well on color and material, still struggle in counting, spatial relations, and composing multiple objects. We hope our benchmark will help carefully measure the research progress in text-to-image synthesis and provide valuable insights for further research.
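
The scoring rule itself is simple to state: a TIFA score is the fraction of the generated questions that a VQA model answers correctly for the image. A minimal sketch follows, with `vqa_answer` as a hypothetical stand-in for any VQA model; the released metric uses more robust answer matching than exact string comparison.

```python
def tifa_score(image, qa_pairs, vqa_answer):
    """qa_pairs: list of (question, expected_answer) generated from the text input."""
    correct = sum(
        vqa_answer(image, question).strip().lower() == expected.strip().lower()
        for question, expected in qa_pairs
    )
    return correct / len(qa_pairs)

# Example: tifa_score(img, [("How many dogs are there?", "2"), ("What color is the car?", "red")], vqa_answer)
```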

Batch Prompting: Efficient Inference with Large Language Model APIs

Jan 19, 2023
Zhoujun Cheng, Jungo Kasai, Tao Yu

Performing inference on hundreds of thousands of samples with large language models (LLMs) can be computationally and financially costly. We propose batch prompting, a simple alternative prompting approach that enables the LLM to run inference in batches, instead of one sample at a time. Our method reduces both token and time costs while retaining downstream performance. We theoretically demonstrate that under a few-shot in-context learning setting, the inference costs decrease almost inversely with the number of samples in each batch. We extensively validate the effectiveness of batch prompting on ten datasets across commonsense QA, arithmetic reasoning, and NLI/NLU: batch prompting significantly reduces the LLM (Codex) inference token and time costs (by up to 5× with six samples per batch) while achieving better or comparable performance. Our analysis shows that the number of samples in each batch and the complexity of tasks affect its performance. Further, batch prompting can be applied across different LLMs and reasoning methods.
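
A minimal sketch of the packing and unpacking steps, assuming indexed Q[i]/A[i] slots; the exact template used in the paper may differ.

```python
import re

def build_batch_prompt(exemplars, batch):
    """exemplars: list of (question, answer) demos; batch: K test questions packed into one prompt."""
    demo_qs = "\n".join(f"Q[{i + 1}]: {q}" for i, (q, _) in enumerate(exemplars))
    demo_as = "\n".join(f"A[{i + 1}]: {a}" for i, (_, a) in enumerate(exemplars))
    test_qs = "\n".join(f"Q[{i + 1}]: {q}" for i, q in enumerate(batch))
    return f"{demo_qs}\n{demo_as}\n\n{test_qs}\n"

def parse_batch_answers(completion, k):
    found = dict(re.findall(r"A\[(\d+)\]:\s*(.+)", completion))
    return [found.get(str(i + 1)) for i in range(k)]
```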

* 18 pages, 9 figures 

NarrowBERT: Accelerating Masked Language Model Pretraining and Inference

Jan 11, 2023
Haoxin Li, Phillip Keung, Daniel Cheng, Jungo Kasai, Noah A. Smith

Large-scale language model pretraining is a very successful form of self-supervised learning in natural language processing, but it is increasingly expensive to perform as the models and pretraining corpora have become larger over time. We propose NarrowBERT, a modified transformer encoder that increases the throughput of masked language model pretraining by more than 2×. NarrowBERT sparsifies the transformer model such that the self-attention queries and feedforward layers only operate on the masked tokens of each sentence during pretraining, rather than all of the tokens as in the usual transformer encoder. We also show that NarrowBERT increases the throughput at inference time by as much as 3.5× with minimal (or no) performance degradation on sentence encoding tasks like MNLI. Finally, we examine the performance of NarrowBERT on IMDB and Amazon review classification and on CoNLL NER, and show that it remains comparable to standard BERT.
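
The core narrowing idea can be sketched in a few lines of PyTorch: only the masked positions act as attention queries, while keys and values still span the full sequence. This is an illustrative sketch under assumed shapes, not the released implementation.

```python
import torch
import torch.nn as nn

class NarrowedAttention(nn.Module):
    """Sketch: queries come only from masked positions; keys/values cover all tokens."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, hidden: torch.Tensor, mask_positions: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model); mask_positions: (batch, n_masked) token indices
        batch_idx = torch.arange(hidden.size(0), device=hidden.device).unsqueeze(-1)
        queries = hidden[batch_idx, mask_positions]   # (batch, n_masked, d_model)
        out, _ = self.attn(queries, hidden, hidden)   # attend from masked tokens over the whole sequence
        return out                                    # updated states only for the masked tokens
```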

* Under review (ACL Rolling Review) 

One Embedder, Any Task: Instruction-Finetuned Text Embeddings

Dec 20, 2022
Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu

We introduce INSTRUCTOR, a new method for computing text embeddings given task instructions: every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions). Unlike encoders from prior work that are more specialized, INSTRUCTOR is a single embedder that can generate text embeddings tailored to different downstream tasks and domains, without any further training. We first annotate instructions for 330 diverse tasks and train INSTRUCTOR on this multitask mixture with a contrastive loss. We evaluate INSTRUCTOR on 70 embedding evaluation tasks (66 of which are unseen during training), ranging from classification and information retrieval to semantic textual similarity and text generation evaluation. INSTRUCTOR, while having an order of magnitude fewer parameters than the previous best model, achieves state-of-the-art performance, with an average improvement of 3.4% compared to the previous best results on the 70 diverse datasets. Our analysis suggests that INSTRUCTOR is robust to changes in instructions, and that instruction finetuning mitigates the challenge of training a single model on diverse datasets. Our model, code, and data are available at https://instructor-embedding.github.io.
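
Based on the project page's documentation rather than this abstract, the released package pairs each text with an instruction at encoding time; the model identifier and call signature below are assumptions to verify against https://instructor-embedding.github.io.

```python
# Assumed interface; verify against the project page before relying on it.
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR("hkunlp/instructor-large")  # model identifier is an assumption
instruction = "Represent the scientific paper title for retrieval:"
text = "One Embedder, Any Task: Instruction-Finetuned Text Embeddings"
embedding = model.encode([[instruction, text]])  # one embedding per (instruction, text) pair
```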

BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting

Dec 19, 2022
Zheng-Xin Yong, Hailey Schoelkopf, Niklas Muennighoff, Alham Fikri Aji, David Ifeoluwa Adelani, Khalid Almubarak, M Saiful Bari, Lintang Sutawika, Jungo Kasai, Ahmed Baruwa, Genta Indra Winata, Stella Biderman, Dragomir Radev, Vassilina Nikoulina

The BLOOM model is a large open-source multilingual language model capable of zero-shot learning, but its pretraining was limited to 46 languages. To improve its zero-shot performance on unseen languages, it is desirable to adapt BLOOM, but previous works have only explored adapting small language models. In this work, we apply existing language adaptation strategies to BLOOM and benchmark its zero-shot prompting performance on eight new languages. We find language adaptation to be effective at improving zero-shot performance in new languages. Surprisingly, adapter-based finetuning is more effective than continued pretraining for large models. In addition, we discover that prompting performance is not significantly affected by language specifics, such as the writing system; it is primarily determined by the size of the language adaptation data. We also add new languages to BLOOMZ, a multitask finetuned version of BLOOM capable of following task instructions zero-shot. We find that including a new language in the multitask finetuning mixture is the most effective method to teach BLOOMZ a new language. We conclude that, with sufficient training data, language adaptation can generalize well to diverse languages. Our code is available at https://github.com/bigscience-workshop/multilingual-modeling/.
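
As one concrete recipe for the adapter-based route, a LoRA setup via the peft library is sketched below. The paper benchmarks several adaptation strategies, so this particular configuration, model size, and hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "bigscience/bloom-560m"  # small BLOOM variant chosen for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Train lightweight adapters for the new language instead of all weights.
lora = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16,
                  lora_dropout=0.05, target_modules=["query_key_value"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# One causal-LM step on monolingual text in the new language (placeholder string).
batch = tokenizer(["<monolingual text in the new language>"], return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
```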
