Sang-goo Lee

Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP

Oct 23, 2023
Hyuhng Joon Kim, Hyunsoo Cho, Sang-Woo Lee, Junyeob Kim, Choonghyun Park, Sang-goo Lee, Kang Min Yoo, Taeuk Kim

When deploying machine learning systems in the wild, it is highly desirable for them to effectively transfer prior knowledge to unfamiliar domains while also raising alarms on anomalous inputs. To address these requirements, Universal Domain Adaptation (UniDA) has emerged as a novel research area in computer vision, focusing on achieving both adaptation ability and robustness (i.e., the ability to detect out-of-distribution samples). While UniDA has led to significant progress in computer vision, its application to language input remains largely unexplored despite its feasibility. In this paper, we propose a comprehensive benchmark for natural language that offers a thorough view of a model's generalizability and robustness. Our benchmark encompasses multiple datasets with varying difficulty levels and characteristics, including temporal shifts and diverse domains. On top of our testbed, we validate existing UniDA methods from computer vision and state-of-the-art domain adaptation techniques from the NLP literature, yielding valuable findings: UniDA methods originally designed for image input can be effectively transferred to the natural language domain, while adaptation difficulty plays a decisive role in determining model performance.
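
To make the evaluation setting concrete, the sketch below shows one common UniDA-style protocol: a classifier over the known (source) label set rejects low-confidence target inputs as unknown, and performance is summarized with the H-score, the harmonic mean of known-class and unknown-class accuracy. The model interface, data loader, and confidence threshold are illustrative assumptions rather than the benchmark's actual implementation.

```python
# Minimal sketch of a UniDA-style evaluation loop, assuming a classifier that
# outputs logits over the known (source) label set. `model`, `target_loader`,
# and the threshold are illustrative, not the benchmark's implementation.
import torch

@torch.no_grad()
def evaluate_unida(model, target_loader, threshold=0.5, unknown_id=-1):
    correct_known, total_known = 0, 0
    correct_unknown, total_unknown = 0, 0
    for x, y in target_loader:                      # y == unknown_id for OOD samples
        probs = torch.softmax(model(x), dim=-1)     # confidence over known classes
        conf, pred = probs.max(dim=-1)
        pred = torch.where(conf >= threshold, pred, torch.full_like(pred, unknown_id))
        known_mask = y != unknown_id
        total_known += known_mask.sum().item()
        correct_known += (pred[known_mask] == y[known_mask]).sum().item()
        total_unknown += (~known_mask).sum().item()
        correct_unknown += (pred[~known_mask] == unknown_id).sum().item()
    acc_known = correct_known / max(total_known, 1)
    acc_unknown = correct_unknown / max(total_unknown, 1)
    # H-score: harmonic mean of known-class and unknown-class accuracy.
    h_score = 2 * acc_known * acc_unknown / max(acc_known + acc_unknown, 1e-12)
    return acc_known, acc_unknown, h_score
```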

* Findings of EMNLP 2023 

CELDA: Leveraging Black-box Language Model as Enhanced Classifier without Labels

Jun 09, 2023
Hyunsoo Cho, Youna Kim, Sang-goo Lee

Utilizing language models (LMs) without internal access is becoming an attractive paradigm in NLP, as many cutting-edge LMs are released through APIs and boast a massive scale. The de facto method in this black-box scenario is prompting, which has shown progressive performance enhancements in situations where data labels are scarce or unavailable. Despite its efficacy, prompting still falls short of fully supervised counterparts and is generally brittle to slight modifications. In this paper, we propose Clustering-enhanced Linear Discriminative Analysis (CELDA), a novel approach that improves text classification accuracy with a very weak supervision signal (i.e., the names of the labels). Our framework draws a precise decision boundary without accessing the weights or gradients of the LM or any data labels. The core ideas of CELDA are twofold: (1) extracting a refined pseudo-labeled dataset from an unlabeled dataset, and (2) training a lightweight and robust model on top of the LM that learns an accurate decision boundary from the extracted noisy dataset. Through in-depth investigations on various datasets, we demonstrate that CELDA reaches a new state of the art in weakly supervised text classification and narrows the gap with fully supervised models. Additionally, the proposed methodology can be applied universally to any LM and has the potential to scale to larger models, making it a viable option for utilizing large LMs.
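
As a rough illustration of the two ideas in the abstract, the sketch below clusters black-box LM embeddings, keeps only the points closest to their centroid as a refined pseudo-labeled set, and fits a lightweight linear discriminative classifier on top. The embedding source, the centroid-distance filter, and the hyperparameters are assumptions for illustration; in particular, mapping clusters to the actual label names (the weak supervision signal) is omitted.

```python
# Illustrative sketch of the two CELDA ideas as described in the abstract:
# (1) pseudo-label unlabeled texts by clustering black-box LM embeddings and
# keeping only confident points, and (2) train a lightweight linear
# discriminative model on those features. Not the paper's exact procedure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pseudo_label_and_train(embeddings: np.ndarray, n_classes: int, keep_ratio: float = 0.5):
    # Step 1: cluster sentence embeddings obtained from the black-box LM.
    kmeans = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(embeddings)
    pseudo = kmeans.labels_
    # Keep only the points closest to their cluster centroid, a simple
    # stand-in for the refined pseudo-label extraction.
    dists = np.linalg.norm(embeddings - kmeans.cluster_centers_[pseudo], axis=1)
    mask = dists <= np.quantile(dists, keep_ratio)
    # Step 2: fit a lightweight linear discriminative classifier on the
    # filtered, pseudo-labeled subset.
    return LinearDiscriminantAnalysis().fit(embeddings[mask], pseudo[mask])
```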

* ACL 2023 

Probing Out-of-Distribution Robustness of Language Models with Parameter-Efficient Transfer Learning

Jan 30, 2023
Hyunsoo Cho, Choonghyun Park, Junyeop Kim, Hyuhng Joon Kim, Kang Min Yoo, Sang-goo Lee

As the size of pre-trained language models (PLMs) continues to increase, numerous parameter-efficient transfer learning (PETL) methods have recently been proposed to compensate for the tremendous cost of fine-tuning. Despite the impressive results achieved by large PLMs and various PETL methods on sundry benchmarks, it remains unclear whether they can effectively handle distributionally shifted inputs. In this study, we systematically explore how the ability to detect out-of-distribution (OOD) samples changes as the size of the PLM grows or the transfer method is altered. Specifically, we evaluate full fine-tuning and PETL techniques such as Adapter, LoRA, and prefix-tuning on three different intention classification tasks, each with language models of various scales.
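
The kind of OOD measurement described above can be sketched as follows: score in-distribution and shifted inputs with the maximum softmax probability (MSP) of the tuned classifier and report AUROC. MSP is a generic detector used here only for illustration; the study may rely on different scores or statistics.

```python
# Minimal OOD-detection sketch: maximum softmax probability (MSP) scores for
# in-distribution and shifted inputs, summarized by AUROC. The model and
# loader interfaces are assumptions for illustration.
import torch
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def msp_scores(model, loader):
    scores = []
    for x, _ in loader:
        probs = torch.softmax(model(x), dim=-1)
        scores.append(probs.max(dim=-1).values)   # higher = more in-distribution
    return torch.cat(scores)

def ood_auroc(model, in_dist_loader, ood_loader):
    s_in = msp_scores(model, in_dist_loader)
    s_out = msp_scores(model, ood_loader)
    labels = torch.cat([torch.ones_like(s_in), torch.zeros_like(s_out)])
    return roc_auc_score(labels.numpy(), torch.cat([s_in, s_out]).numpy())
```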

* WIP 

Prompt-Augmented Linear Probing: Scaling Beyond The Limit of Few-shot In-Context Learners

Dec 28, 2022
Hyunsoo Cho, Hyuhng Joon Kim, Junyeob Kim, Sang-Woo Lee, Sang-goo Lee, Kang Min Yoo, Taeuk Kim

Through in-context learning (ICL), large-scale language models are effective few-shot learners without additional model fine-tuning. However, ICL performance does not scale well with the number of available training samples, as it is limited by the inherent input-length constraint of the underlying language model. Meanwhile, many studies have revealed that language models are also powerful feature extractors, allowing them to be utilized in a black-box manner and enabling the linear probing paradigm, where lightweight discriminators are trained on top of pre-extracted input representations. This paper proposes prompt-augmented linear probing (PALP), a hybrid of linear probing and ICL that leverages the best of both worlds. PALP inherits the scalability of linear probing and the capability of prompting to encourage language models to derive more meaningful representations by tailoring the input into a more comprehensible form. Through in-depth investigations on various datasets, we verify that PALP significantly enhances the input representations, closing the gap between ICL in the data-hungry scenario and fine-tuning in the data-abundant scenario with little training overhead, potentially making PALP a strong alternative in the black-box scenario.
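
A hedged sketch of the PALP recipe as described: wrap each input in a prompt template, extract a frozen-LM representation of the prompted input, and train a lightweight linear probe on top. The template, pooling choice, and backbone (gpt2) are illustrative assumptions, not the paper's configuration.

```python
# Sketch of prompt-augmented linear probing: prompted inputs -> frozen LM
# features -> linear classifier. Template and backbone are placeholders.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token         # gpt2 has no pad token by default
model = AutoModel.from_pretrained("gpt2").eval()

TEMPLATE = "Review: {text}\nSentiment:"           # hypothetical prompt template

@torch.no_grad()
def embed(texts):
    prompts = [TEMPLATE.format(text=t) for t in texts]
    batch = tokenizer(prompts, return_tensors="pt", padding=True, truncation=True)
    hidden = model(**batch).last_hidden_state     # (batch, seq_len, hidden)
    # Pool with the representation of the last non-padding token.
    last = batch["attention_mask"].sum(dim=1) - 1
    return hidden[torch.arange(hidden.size(0)), last].numpy()

def train_palp_probe(train_texts, train_labels):
    return LogisticRegression(max_iter=1000).fit(embed(train_texts), train_labels)
```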

* AAAI 2023 

Self-Generated In-Context Learning: Leveraging Auto-regressive Language Models as a Demonstration Generator

Jun 16, 2022
Hyuhng Joon Kim, Hyunsoo Cho, Junyeob Kim, Taeuk Kim, Kang Min Yoo, Sang-goo Lee

Large-scale pre-trained language models (PLMs) are well known for being able to solve a task simply by conditioning on a few input-label pairs, dubbed demonstrations, in a prompt, without being explicitly tuned for the desired downstream task. Such a process (i.e., in-context learning), however, naturally leads to a high reliance on the demonstrations, which are usually selected from external datasets. In this paper, we propose self-generated in-context learning (SG-ICL), which generates demonstrations for in-context learning from the PLM itself to minimize reliance on external demonstrations. We conduct experiments on four different text classification tasks and show that SG-ICL significantly outperforms zero-shot learning and is generally worth approximately 0.6 gold training samples. Moreover, our generated demonstrations show more consistent performance with lower variance compared to demonstrations randomly selected from the training dataset.
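
The SG-ICL idea can be sketched roughly as follows: first ask the PLM to generate one demonstration per class, then prepend the self-generated demonstrations when predicting on a new input. The prompt wordings, the backbone, and the crude string-matching prediction are illustrative assumptions, not the paper's templates or scoring method.

```python
# Rough SG-ICL sketch: self-generate one demonstration per class, then use the
# generated demonstrations as the in-context prompt. Templates are hypothetical.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
LABELS = ["positive", "negative"]                 # hypothetical task labels

def generate_demonstrations():
    demos = []
    for label in LABELS:
        prompt = f"Write a movie review with {label} sentiment.\nReview:"
        out = generator(prompt, max_new_tokens=30)[0]["generated_text"]
        demos.append(f"Review: {out[len(prompt):].strip()}\nSentiment: {label}")
    return demos

def classify(test_input, demos):
    context = "\n\n".join(demos)
    prompt = f"{context}\n\nReview: {test_input}\nSentiment:"
    out = generator(prompt, max_new_tokens=3)[0]["generated_text"][len(prompt):]
    # Crude decision rule for illustration; the paper compares label likelihoods.
    return "positive" if out.strip().lower().startswith("positive") else "negative"
```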

* NAACL 2022 Workshop on Large-scale Pre-trained Language Models 

Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations

May 25, 2022
Junyeob Kim, Hyuhng Joon Kim, Hyunsoo Cho, Hwiyeol Jo, Sang-Woo Lee, Sang-goo Lee, Kang Min Yoo, Taeuk Kim

Despite the recent explosion of research interest, the mechanism of in-context learning and the precise impact of demonstration quality remain elusive. While current literature suggests that in-context learning shares a similar mechanism with supervised learning, Min et al. (2022) recently reported that, surprisingly, input-label correspondence is less important than other aspects of prompt demonstrations. Inspired by this counter-intuitive observation, we re-examine the importance of ground-truth labels in in-context learning from diverse and statistical points of view. With the aid of newly introduced metrics, i.e., Ground-truth Label Effect Ratio (GLER), demo-gain, and label sensitivity, we find that the impact of correct input-label matching can vary according to different configurations. Expanding upon the previous key finding on the role of demonstrations, these complementary and contrastive results suggest that one might need to take more care when estimating the impact of each component of in-context learning demonstrations.
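
The abstract names the metrics without defining them; the sketch below encodes one plausible reading of demo-gain and GLER purely for illustration and should not be read as the paper's exact formulas.

```python
# Hypothetical reading of two of the metrics named above; the paper's own
# definitions may differ.
def demo_gain(acc_gold_demos: float, acc_zero_shot: float) -> float:
    # Gain from adding demonstrations with gold labels over zero-shot.
    return acc_gold_demos - acc_zero_shot

def gler(acc_gold_demos: float, acc_random_labels: float, acc_zero_shot: float) -> float:
    # Fraction of the demonstration gain that disappears when gold labels
    # are replaced with random ones (assumed reading of GLER).
    gain = demo_gain(acc_gold_demos, acc_zero_shot)
    return (acc_gold_demos - acc_random_labels) / gain if gain else 0.0
```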


Exploiting Session Information in BERT-based Session-aware Sequential Recommendation

May 04, 2022
Jinseok Seol, Youngrok Ko, Sang-goo Lee

In recommendation systems, utilizing the user interaction history as sequential information has resulted in significant performance improvements. However, in many online services, user interactions are commonly grouped into sessions that presumably share a preference, which calls for an approach different from ordinary sequence representation techniques. To this end, sequence representation models with hierarchical structures or multiple viewpoints have been developed, but at the cost of rather complex network structures. In this paper, we propose three methods to improve recommendation performance by exploiting session information while adding minimal parameters to a BERT-based sequential recommendation model: using session tokens, adding session segment embeddings, and applying time-aware self-attention. We demonstrate the feasibility of the proposed methods through experiments on widely used recommendation datasets.
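
A minimal sketch of one of the three ideas, session segment embeddings, is shown below: a session id is embedded and added to the item and position embeddings, analogous to BERT's segment embeddings. Dimensions and the surrounding model are illustrative assumptions, not the paper's implementation.

```python
# Minimal PyTorch sketch of session segment embeddings in a BERT-style
# sequential recommender; sizes and surrounding architecture are placeholders.
import torch
import torch.nn as nn

class SessionAwareEmbedding(nn.Module):
    def __init__(self, n_items, max_len, max_sessions, hidden=64):
        super().__init__()
        self.item = nn.Embedding(n_items, hidden, padding_idx=0)
        self.position = nn.Embedding(max_len, hidden)
        self.session = nn.Embedding(max_sessions, hidden)   # one segment id per session

    def forward(self, item_ids, session_ids):
        # item_ids, session_ids: (batch, seq_len); session_ids mark which
        # session each interaction belongs to, analogous to BERT's segments.
        positions = torch.arange(item_ids.size(1), device=item_ids.device)
        return (self.item(item_ids)
                + self.position(positions)[None, :, :]
                + self.session(session_ids))
```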

* 6 pages, accepted in The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) 2022, short paper 

Technologies for AI-Driven Fashion Social Networking Service with E-Commerce

Mar 11, 2022
Jinseok Seol, Seongjae Kim, Sungchan Park, Holim Lim, Hyunsoo Na, Eunyoung Park, Dohee Jung, Soyoung Park, Kangwoo Lee, Sang-goo Lee

The rapid growth of the online fashion market has brought demands for innovative fashion services and commerce platforms. With the recent success of deep learning, many applications employ AI technologies such as visual search and recommender systems to provide novel and beneficial services. In this paper, we describe the applied technologies behind an AI-driven fashion social networking service (SNS) that incorporates fashion e-commerce. In the application, people can share and browse their outfit-of-the-day (OOTD) photos, while AI analyzes them and suggests OOTDs of a similar style along with related products. To this end, we trained deep-learning-based fashion AI models and integrated them to build a fashion visual search system and an OOTD recommender system. With the aforementioned technologies, the AI-driven fashion SNS platform, iTOO, has been successfully launched.
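
As an illustration of the visual search component, the sketch below encodes images with a generic pretrained vision backbone and retrieves nearest neighbors of a query OOTD photo by cosine similarity. The backbone and retrieval scheme are placeholder choices, not the models deployed in iTOO.

```python
# Generic embedding-based visual search sketch: pretrained backbone features
# plus cosine-similarity retrieval; not the service's actual models.
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()            # use pooled features as embeddings
backbone.eval()

@torch.no_grad()
def embed_images(images: torch.Tensor) -> torch.Tensor:
    # images: (N, 3, 224, 224), already normalized for the backbone.
    return F.normalize(backbone(images), dim=-1)

@torch.no_grad()
def visual_search(query: torch.Tensor, catalog_emb: torch.Tensor, k: int = 5):
    # catalog_emb: pre-computed, L2-normalized catalog embeddings (M, D).
    sims = embed_images(query.unsqueeze(0)) @ catalog_emb.T    # cosine similarity
    return sims.topk(k, dim=-1).indices.squeeze(0)             # indices of similar items
```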

* 16 pages, accepted in International Semantic Intelligence Conference (ISIC) 2022, The Applications and Deployment Track 