Pang Wei Koh

Information-Theoretic Distillation for Reference-less Summarization
Mar 20, 2024
Jaehun Jung, Ximing Lu, Liwei Jiang, Faeze Brahman, Peter West, Pang Wei Koh, Yejin Choi

Reliable, Adaptable, and Attributable Language Models with Retrieval
Mar 05, 2024
Akari Asai, Zexuan Zhong, Danqi Chen, Pang Wei Koh, Luke Zettlemoyer, Hannaneh Hajishirzi, Wen-tau Yih

Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models
Feb 05, 2024
Zhiyuan Hu, Chumin Liu, Xidong Feng, Yilun Zhao, See-Kiong Ng, Anh Tuan Luu, Junxian He, Pang Wei Koh, Bryan Hooi

Instructional Fingerprinting of Large Language Models
Jan 21, 2024
Jiashu Xu, Fei Wang, Mingyu Derek Ma, Pang Wei Koh, Chaowei Xiao, Muhao Chen

The Generative AI Paradox: "What It Can Create, It May Not Understand"
Oct 31, 2023
Peter West, Ximing Lu, Nouha Dziri, Faeze Brahman, Linjie Li, Jena D. Hwang, Liwei Jiang, Jillian Fisher, Abhilasha Ravichander, Khyathi Chandu, Benjamin Newman, Pang Wei Koh, Allyson Ettinger, Yejin Choi

OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
Aug 07, 2023
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt

Are aligned neural networks adversarially aligned?
Jun 26, 2023
Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramer, Ludwig Schmidt

Proximity-Informed Calibration for Deep Neural Networks
Jun 07, 2023
Miao Xiong, Ailin Deng, Pang Wei Koh, Jiaying Wu, Shen Li, Jianqing Xu, Bryan Hooi

FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation
May 23, 2023
Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi
