Hwiyeol Jo

Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations

May 25, 2022
Junyeob Kim, Hyuhng Joon Kim, Hyunsoo Cho, Hwiyeol Jo, Sang-Woo Lee, Sang-goo Lee, Kang Min Yoo, Taeuk Kim

Despite the recent explosion of research interest, in-context learning and the precise impact of demonstration quality remain elusive. While the current literature suggests that in-context learning shares a similar mechanism with supervised learning, Min et al. (2022) recently reported that, surprisingly, input-label correspondence is less important than other aspects of prompt demonstrations. Inspired by this counter-intuitive observation, we re-examine the importance of ground-truth labels in in-context learning from diverse and statistical points of view. With the aid of newly introduced metrics, i.e., Ground-truth Label Effect Ratio (GLER), demo-gain, and label sensitivity, we find that the impact of correct input-label matching can vary across configurations. Expanding upon the previous key finding on the role of demonstrations, these complementary and contrastive results suggest that more care may be needed when estimating the impact of each component of in-context learning demonstrations.

Human-Like Active Learning: Machines Simulating the Human Learning Process

Nov 07, 2020
Jaeseo Lim, Hwiyeol Jo, Byoung-Tak Zhang, Jooyong Park

Although the use of active learning to increase learners' engagement has recently been introduced in a variety of methods, empirical experiments are lacking. In this study, we aligned two experiments in order to (1) form a hypothesis for machines and (2) empirically confirm the effect of active learning on learning. In Experiment 1, we compared a passive form of learning with an active form of learning. The results showed that active learning led to greater learning outcomes than passive learning. In the machine experiment, which was based on the human result, we imitated human active learning as a form of knowledge distillation, and the active learning framework performed better than the passive learning framework. In the end, we showed not only that we can build a better machine training framework from human experimental results, but also that we can empirically confirm the results of the human experiment through imitated machine experiments: human-like active learning has a crucial effect on learning performance.
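The machine-side imitation above is described only as "a form of knowledge distillation". As a hedged point of reference, the sketch below shows a generic distillation loss in PyTorch; the temperature, the weighting, and the mapping of the human active-learning condition onto teacher and student are illustrative assumptions, not the paper's implementation.

```python
# Generic knowledge-distillation loss (illustrative; not the paper's code).
# Temperature T and mixing weight alpha are hypothetical defaults.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # soft targets: match the teacher's temperature-softened distribution
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # hard targets: ordinary cross-entropy with the ground-truth labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```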

* NeurIPS 2020 Workshop on BabyMind 

Ruminating Word Representations with Random Noised Masker

Nov 08, 2019
Hwiyeol Jo, Byoung-Tak Zhang

We introduce a training method, which we call GROVER (Gradual Rumination On the Vector with maskERs), that improves both word representations and task performance. The method gradually and iteratively adds random noise to word embeddings while training a model. GROVER first starts from the conventional training process and extracts the fine-tuned representations. Next, we gradually add random noise to the word representations and repeat the training process from scratch, initializing with the noised word representations. Through this re-training process, some of the noise is compensated for, while other noise is exploited to learn better representations. As a result, we obtain word representations that are further fine-tuned and specialized to the task. Experimenting on 5 text classification datasets, our method improves model performance on most of them. Moreover, we show that our method can be combined with other regularization techniques, further improving model performance.
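A minimal sketch of the rumination loop described above, assuming a `train_model` callable that fine-tunes a model and returns the updated embeddings; the Gaussian noise and the per-round noise schedule are illustrative choices, not the authors' exact setup.

```python
# Illustrative GROVER-style loop: train, noise the embeddings, retrain from scratch.
import numpy as np

def grover_ruminate(embeddings, train_model, n_rounds=3, noise_scale=0.01, seed=0):
    """embeddings: (vocab, dim) array; train_model: callable that fine-tunes and
    returns word embeddings (assumed interface)."""
    rng = np.random.default_rng(seed)
    current = embeddings
    for round_idx in range(n_rounds):
        # fine-tune the model and extract the updated word representations
        current = train_model(current)
        # gradually add random noise before the next from-scratch training pass
        scale = noise_scale * (round_idx + 1)  # assumed "gradual" schedule
        current = current + rng.normal(0.0, scale, size=current.shape)
    # a final pass without added noise yields the specialized representations
    return train_model(current)
```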

* AAAI version 

Delta-training: Simple Semi-Supervised Text Classification using Pretrained Word Embeddings

Jan 22, 2019
Hwiyeol Jo, Ceyda Cinarel

We propose a novel and simple method for semi-supervised text classification. The method starts from the hypothesis that a classifier with pretrained word embeddings always outperforms the same classifier with randomly initialized word embeddings, as empirically observed in NLP tasks. Our method builds two sets of classifiers as a model ensemble and initializes their word embeddings differently: one with random initialization, the other with pretrained word embeddings. Following the self-training framework, we focus on the unlabeled examples on which the two classifiers disagree. We also introduce label refinement and meta-epoch early stopping to improve confidence in the predicted labels. Experiments on 4 different classification datasets show that our method performs better than using only the training set. Delta-training also outperforms conventional self-training in multi-class classification, showing robust performance against error accumulation.
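A minimal sketch of one Delta-training meta-epoch as described above, assuming scikit-learn-style `fit`/`predict` classifiers; pseudo-labeling disagreements with the pretrained-embedding classifier's prediction follows the stated hypothesis, and the label-refinement and early-stopping details are omitted.

```python
# Illustrative Delta-training meta-epoch (assumed interfaces; not the paper's code).
def delta_training_round(clf_random, clf_pretrained, X_labeled, y_labeled, X_unlabeled):
    # two classifiers that differ only in embedding initialization
    clf_random.fit(X_labeled, y_labeled)       # randomly initialized embeddings
    clf_pretrained.fit(X_labeled, y_labeled)   # pretrained embeddings

    pred_random = clf_random.predict(X_unlabeled)
    pred_pretrained = clf_pretrained.predict(X_unlabeled)

    # keep only the examples where the two classifiers disagree; by hypothesis,
    # the pretrained-embedding classifier is the more reliable of the two there
    new_X, new_y = [], []
    for x, yr, yp in zip(X_unlabeled, pred_random, pred_pretrained):
        if yr != yp:
            new_X.append(x)
            new_y.append(yp)   # pseudo-label from the pretrained-embedding classifier
    return new_X, new_y        # added to the labeled set in the next meta-epoch
```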

* Anonymous NAACL 2019 submission, archived before public presentation 

Deep Extrofitting: Specialization and Generalization of Expansional Retrofitting Word Vectors using Semantic Lexicons

Sep 04, 2018
Hwiyeol Jo

Retrofitting techniques, which inject external resources into word representations, have compensated for the weakness of distributed representations in capturing semantic and relational knowledge between words. Implicitly retrofitting word vectors with an expansional technique (extrofitting) has been shown to outperform retrofitting on word similarity tasks while generalizing better. In this paper, we propose deep extrofitting: in-depth stacking of extrofitting. We first stack extrofitting for word vector generalization. Next, we combine extrofitting with retrofitting, finding a new vector space for specialization that prevents retrofitting from converging within a few iterations. Experimenting with GloVe, we show that our methods outperform the previous methods on most word similarity tasks while requiring only synonyms as external resources. We also report further analysis of the effects of word vector specialization and word vector generalization on text classification tasks.
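A minimal sketch of how the stacking could be composed, assuming `extrofit_step` and `retrofit_step` are callables that each implement one pass of extrofitting and retrofitting respectively; it only illustrates the composition, not the authors' implementation.

```python
# Illustrative composition of stacked extrofitting and extrofitting+retrofitting.
def deep_extrofit(vectors, lexicon, extrofit_step, depth=3):
    # repeated extrofitting passes for word-vector generalization
    for _ in range(depth):
        vectors = extrofit_step(vectors, lexicon)
    return vectors

def extro_retrofit(vectors, lexicon, extrofit_step, retrofit_step, depth=3):
    # alternating passes: extrofitting keeps retrofitting from converging within a
    # few iterations, yielding a new vector space for specialization
    for _ in range(depth):
        vectors = extrofit_step(vectors, lexicon)
        vectors = retrofit_step(vectors, lexicon)
    return vectors
```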

What we really want to find by Sentiment Analysis: The Relationship between Computational Models and Psychological State

Jun 03, 2018
Hwiyeol Jo, Soo-Min Kim, Jeong Ryu

As a first step toward modeling a person's emotional state, we build sentiment analysis models with existing deep neural network algorithms and compare the models with psychological measurements to illuminate the relationship. In the experiments, we first examined the psychological states of 64 participants and asked them to summarize the story of a book, Chronicle of a Death Foretold (Marquez, 1981). Second, we trained models on 365,802 crawled movie reviews and then evaluated the participants' summaries with the pretrained model, as a form of transfer learning. Given the background that emotion affects memory, we investigated the relationship between the summaries' evaluation scores from the computational models and the examined psychological measurements. The results show that although the CNN performed best among the deep neural network algorithms tested (LSTM, GRU), its results are not related to psychological state. Rather, the GRU shows more explainable results with respect to psychological state. The contributions of this paper can be summarized as follows: (1) we illuminate the relationship between computational models and psychological measurements; (2) we suggest this framework as an objective method to evaluate emotion, i.e., the real sentiment analysis of a person.

* Paper version of "Psychological State in Text: A Limitation of Sentiment Analysis". arXiv admin note: text overlap with arXiv:1607.03707 

Psychological State in Text: A Limitation of Sentiment Analysis

Jun 03, 2018
Hwiyeol Jo, Jeong Ryu

Starting with the idea that sentiment analysis models should be able to predict not only positive or negative polarity but also other psychological states of a person, we implement a sentiment analysis model to investigate the relationship between the model and emotional state. We first examine psychological measurements of 64 participants and ask them to write a book report about a story. After that, we train our sentiment analysis model on crawled movie review data. We finally evaluate the participants' writings with the pretrained model, as a form of transfer learning. The results show that the sentiment analysis model performs well at predicting a score, but the score has no correlation with participants' self-reported sentiment.

* In Proceedings of IJCAI-ECAI Workshop on AI and Computational Psychology: Theories, Algorithms and Applications (CompPsy) 

Extrofitting: Enriching Word Representation and its Vector Space with Semantic Lexicons

Jun 03, 2018
Hwiyeol Jo, Stanley Jungkyu Choi

We propose a post-processing method, which we call extrofitting, for enriching not only word representations but also their vector space using semantic lexicons. The method consists of 3 steps: (i) expanding one or more dimensions of all word vectors, filled with a representative value of each vector; (ii) transferring semantic knowledge by averaging the representative values of synonyms and filling the expanded dimension(s) with the averages (these two steps bring the representations of synonyms closer together); (iii) projecting the vector space using Linear Discriminant Analysis, which removes the expanded dimension(s) carrying the semantic knowledge. Experimenting with GloVe, we find that our method outperforms Faruqui's retrofitting on some word similarity tasks. We also report further analysis of our method with respect to word vector dimension, vocabulary size, and other well-known pretrained word vectors (e.g., Word2Vec, fastText).
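A minimal sketch of the three steps, assuming each vector's mean as the representative value and synonym-set ids as the LDA classes; both choices are illustrative and may differ from the paper's.

```python
# Illustrative extrofitting pass (assumptions noted above; not the paper's code).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def extrofit(vectors, synset_ids):
    """vectors: (V, d) word vectors; synset_ids: (V,) synonym-set id per word."""
    synset_ids = np.asarray(synset_ids)
    V, d = vectors.shape
    # (i) expand one dimension, filled with a representative value of each vector
    rep = vectors.mean(axis=1, keepdims=True)
    expanded = np.hstack([vectors, rep])
    # (ii) average the representative values within each synonym set and write
    #      the average back, pulling synonyms closer together
    for s in np.unique(synset_ids):
        mask = synset_ids == s
        expanded[mask, -1] = rep[mask].mean()
    # (iii) project with LDA, removing the expanded dimension while the injected
    #       semantic knowledge shapes the resulting space
    n_components = min(d, len(np.unique(synset_ids)) - 1)
    lda = LinearDiscriminantAnalysis(n_components=n_components)
    return lda.fit_transform(expanded, synset_ids)
```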

* In Proceedings of the 3rd ACL Workshop on Representation Learning for NLP (RepL4NLP) 

Re-presenting a Story by Emotional Factors using Sentimental Analysis Method

Jul 13, 2016
Hwiyeol Jo, Yohan Moon, Jong In Kim, Jeong Ryu

Remembering an event is affected by personal emotional status. We examined the psychological status and personal factors of undergraduate students (N=64): depression (Center for Epidemiological Studies - Depression; Radloff, 1977), present affect (Positive and Negative Affect Schedule; Watson et al., 1988), life orientation (Life Orientation Test; Scheier & Carver, 1985), self-awareness (Core Self-Evaluation Scale; Judge et al., 2003), and social factors (Social Support; Sarason et al., 1983), and collected their summaries of a story, Chronicle of a Death Foretold (Gabriel Garcia Marquez, 1981). We implement a sentiment analysis model based on a convolutional neural network (LeCun & Bengio, 1995) to evaluate each summary. In the vein of transfer learning (Pan & Yang, 2010), we collected 38,265 movie reviews to train the model and then used it to score each student's summary. The results on CES-D and PANAS show the relationship between emotion and memory retrieval as follows: depressed people tended to represent the story more negatively and seemed less expressive, while people high in affect (high PANAS scores) retrieved their memories more expressively than others, using more negative words than others. The contributions of this study can be summarized as follows: first, illuminating the relationship between emotion and its effect when storing or retrieving a memory; second, suggesting an objective method to evaluate the intensity of emotion expressed in natural language, using a sentiment analysis model.

* Paper version of CogSci2016; We should correct poor English 