Joel Jang

Semiparametric Token-Sequence Co-Supervision

Mar 14, 2024
Hyunji Lee, Doyoung Kim, Jihoon Jun, Sejune Joo, Joel Jang, Kyoung-Woon On, Minjoon Seo

LangBridge: Multilingual Reasoning Without Multilingual Supervision

Jan 19, 2024
Dongkeun Yoon, Joel Jang, Sungdong Kim, Seungone Kim, Sheikh Shafayat, Minjoon Seo

Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2

Nov 20, 2023
Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, Hannaneh Hajishirzi

How Well Do Large Language Models Truly Ground?

Nov 15, 2023
Hyunji Lee, Sejune Joo, Chaeeun Kim, Joel Jang, Doyoung Kim, Kyoung-Woon On, Minjoon Seo

Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging

Oct 17, 2023
Joel Jang, Seungone Kim, Bill Yuchen Lin, Yizhong Wang, Jack Hessel, Luke Zettlemoyer, Hannaneh Hajishirzi, Yejin Choi, Prithviraj Ammanabrolu

Prometheus: Inducing Fine-grained Evaluation Capability in Language Models

Oct 12, 2023
Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, Minjoon Seo

Gradient Ascent Post-training Enhances Language Model Generalization

Jun 12, 2023
Dongkeun Yoon, Joel Jang, Sungdong Kim, Minjoon Seo

Continually Updating Generative Retrieval on Dynamic Corpora

May 27, 2023
Soyoung Yoon, Chaeeun Kim, Hyunji Lee, Joel Jang, Minjoon Seo

Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis

May 24, 2023
Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning

May 23, 2023
Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo
