Hanock Kwak

Pivotal Role of Language Modeling in Recommender Systems: Enriching Task-specific and Task-agnostic Representation Learning

Dec 13, 2022
Kyuyong Shin, Hanock Kwak, Wonjae Kim, Jisu Jeong, Seungjae Jung, Kyung-Min Kim, Jung-Woo Ha, Sang-Woo Lee

Recent studies have proposed unified user modeling frameworks that leverage user behavior data from various applications. Many of them benefit from representing users' behavior sequences as plain text, which captures rich information from any domain or system without losing generality. Hence, a question arises: can language modeling on a user history corpus help improve recommender systems? While language modeling has been widely investigated in many domains, its application to recommender systems remains underexplored. We show that language modeling applied directly to task-specific user histories achieves excellent results on diverse recommendation tasks, and that leveraging additional task-agnostic user histories delivers significant performance benefits. We further demonstrate that our approach can provide promising transfer learning capabilities for a broad spectrum of real-world recommender systems, even on unseen domains and services.
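
A minimal sketch of the idea (not the authors' implementation): user histories are serialized as plain text and a small causal language model is trained on them with the standard next-token objective. The item names and the serialization format below are illustrative assumptions.

```python
# Hedged sketch: language modeling over user histories rendered as plain text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Each user history becomes one text line: chronological behavior descriptions
# (format and items are made up for illustration).
histories = [
    "searched running shoes; viewed trail shoes; purchased trail shoes",
    "viewed wireless earbuds; added to cart; purchased wireless earbuds",
]

batch = tokenizer(histories, return_tensors="pt", padding=True, truncation=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100   # ignore padding in the LM loss

outputs = model(**batch, labels=labels)       # next-token prediction over histories
outputs.loss.backward()                       # an optimizer step would follow
print(float(outputs.loss))
```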

* 14 pages, 5 figures, 9 tables 

Scaling Law for Recommendation Models: Towards General-purpose User Representations

Dec 01, 2021
Kyuyong Shin, Hanock Kwak, Kyung-Min Kim, Su Young Kim, Max Nihlen Ramstrom, Jisu Jeong

A recent trend shows that a general class of models, e.g., BERT, GPT-3, and CLIP, trained on broad data at scale can exhibit a great variety of capabilities with a single learning architecture. In this work, we explore the possibility of general-purpose user representation learning by training a universal user encoder at large scale. We demonstrate that the scaling law holds in the user modeling area, where the training error scales as a power law with the amount of compute. Our Contrastive Learning User Encoder (CLUE) optimizes task-agnostic objectives, and the resulting user embeddings stretch our expectation of what is possible in various downstream tasks. CLUE also shows strong transferability to other domains and systems, as an online experiment shows significant improvements in online Click-Through Rate (CTR). Furthermore, we investigate how performance changes with the scale-up factors, i.e., model capacity, sequence length, and batch size. Finally, we discuss the broader impacts of CLUE.
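
The reported scaling law can be illustrated with a simple power-law fit of training error against compute; the numbers below are made up for demonstration, not results from the paper.

```python
# Illustrative only: fit L(C) = a * C**(-b), the power-law form of a scaling law.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21])      # hypothetical compute (FLOPs)
train_error = np.array([0.52, 0.41, 0.33, 0.26])  # hypothetical training error

# A power law is linear in log-log space: log L = log a - b * log C.
slope, intercept = np.polyfit(np.log(compute), np.log(train_error), 1)
a, b = np.exp(intercept), -slope
print(f"L(C) ~ {a:.3g} * C^(-{b:.3f})")
```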

* 11 pages, 6 figures, 5 tables 

Global-Local Item Embedding for Temporal Set Prediction

Sep 05, 2021
Seungjae Jung, Young-Jin Park, Jisu Jeong, Kyung-Min Kim, Hiun Kim, Minkyu Kim, Hanock Kwak

Temporal set prediction is becoming increasingly important as many companies employ recommender systems in their online businesses, e.g., personalized purchase prediction of shopping baskets. While most previous techniques have focused on leveraging a user's own history, combining it with other users' histories remains largely untapped. This paper proposes Global-Local Item Embedding (GLOIE), which learns to utilize the temporal properties of sets across all users as well as within a single user, coining the terms global and local information to distinguish the two temporal patterns. GLOIE uses a Variational Autoencoder (VAE) and a dynamic graph-based model to capture global and local information, respectively, and then applies attention to integrate the resulting item embeddings. Additionally, we propose a Tweedie output for the VAE decoder, since it can easily model zero-inflated and long-tailed distributions, which fit several real-world data distributions better than Gaussian or multinomial counterparts. When evaluated on three public benchmarks, our algorithm consistently outperforms previous state-of-the-art methods in most ranking metrics.
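
A hedged sketch of the Tweedie output proposed for the VAE decoder: the Tweedie negative log-likelihood (dropping terms constant in the mean) for power 1 < p < 2, which accommodates zero-inflated, long-tailed targets. This is not the authors' code; the example values are placeholders.

```python
# Tweedie NLL (up to terms constant in mu) for 1 < p < 2.
import torch

def tweedie_nll(y, mu, p=1.5, eps=1e-8):
    """Mean Tweedie negative log-likelihood; mu must be positive."""
    mu = mu.clamp(min=eps)
    return torch.mean(-y * mu.pow(1 - p) / (1 - p) + mu.pow(2 - p) / (2 - p))

# Mostly-zero counts with a long tail, as in basket/purchase data.
y = torch.tensor([0.0, 0.0, 0.0, 1.0, 7.0])
mu = torch.tensor([0.1, 0.2, 0.1, 1.2, 5.0])   # decoder outputs (positive rates)
print(tweedie_nll(y, mu).item())
```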

* 8 pages, 3 figures. To appear in RecSys 2021 LBR 

One4all User Representation for Recommender Systems in E-commerce

May 24, 2021
Kyuyong Shin, Hanock Kwak, Kyung-Min Kim, Minkyu Kim, Young-Jin Park, Jisu Jeong, Seungjae Jung

General-purpose representation learning through large-scale pre-training has shown promising results in various machine learning fields. In an e-commerce domain, the goal of general-purpose, i.e., one-for-all, representations is efficient application to extensive downstream tasks such as user profiling, targeting, and recommendation. In this paper, we systematically compare the generalizability of two learning strategies: transfer learning through the proposed model, ShopperBERT, versus learning from scratch. ShopperBERT learns nine pretext tasks with 79.2M parameters from 0.8B user behaviors collected over two years to produce user embeddings. As a result, MLPs that employ our embeddings outperform more complex models trained from scratch on five out of six tasks. Specifically, the pre-trained embeddings outperform task-specific supervised features and strong baselines that learn from an auxiliary dataset to address the cold-start problem. We also report the computational efficiency of the pre-trained features and visualize the embeddings.
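
A minimal sketch (assumed, not the authors' code) of the transfer setup described above: a simple MLP head trained on frozen pre-trained user embeddings, reused per downstream task instead of training a task-specific model from scratch. The embedding dimension, labels, and data are placeholders.

```python
# Hedged sketch: MLP head on frozen pre-trained user embeddings.
import torch
import torch.nn as nn

emb_dim, num_classes = 768, 2                      # illustrative sizes
pretrained_user_emb = torch.randn(1000, emb_dim)   # stand-in for pre-trained embeddings
labels = torch.randint(0, num_classes, (1000,))    # stand-in downstream labels

head = nn.Sequential(
    nn.Linear(emb_dim, 256), nn.ReLU(),
    nn.Linear(256, num_classes),
)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

for _ in range(5):                                 # embeddings stay frozen; only the MLP trains
    logits = head(pretrained_user_emb)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(float(loss))
```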

A Worrying Analysis of Probabilistic Time-series Models for Sales Forecasting

Nov 21, 2020
Seungjae Jung, Kyung-Min Kim, Hanock Kwak, Young-Jin Park

Probabilistic time-series models have become popular in the forecasting field as they help make optimal decisions under uncertainty. Despite the growing interest, a lack of thorough analysis hinders choosing what is worth applying for a given task. In this paper, we analyze the performance of three prominent probabilistic time-series models for sales forecasting. To remove the role of random chance in each architecture's performance, we adopt two experimental principles: 1) a large-scale dataset with various cross-validation sets, and 2) standardized training and hyperparameter selection. The experimental results show that a simple Multi-Layer Perceptron and Linear Regression outperform the probabilistic models on RMSE without any feature engineering. Overall, the probabilistic models fail to achieve better point-estimation performance, such as RMSE and MAPE, than comparably simple baselines. We analyze and discuss the performance of the probabilistic time-series models.
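
For reference, the point-estimate metrics used in the comparison, RMSE and MAPE, can be computed as follows; this is a generic implementation, not the paper's evaluation code, and the sales figures are illustrative.

```python
# RMSE and MAPE for point forecasts.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred, eps=1e-8):
    # Percentage error; eps guards against division by zero on zero-sales days.
    return np.mean(np.abs((y_true - y_pred) / np.maximum(np.abs(y_true), eps))) * 100

y_true = np.array([120.0, 80.0, 45.0, 10.0])   # illustrative weekly sales
y_pred = np.array([110.0, 85.0, 50.0, 13.0])
print(rmse(y_true, y_pred), mape(y_true, y_pred))
```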

* NeurIPS 2020 workshop (I Can't Believe It's Not Better, ICBINB@NeurIPS 2020). All authors contributed equally to this research 

Tripartite Heterogeneous Graph Propagation for Large-scale Social Recommendation

Jul 24, 2019
Kyung-Min Kim, Donghyun Kwak, Hanock Kwak, Young-Jin Park, Sangkwon Sim, Jae-Han Cho, Minkyu Kim, Jihun Kwon, Nako Sung, Jung-Woo Ha

Graph Neural Networks (GNNs) have emerged as a promising method for relational representation learning, including in recommender systems. However, several challenging properties of social graphs hinder the practical use of GNNs for social recommendation, such as complex, noisy connections and high heterogeneity. Oversmoothing in GNNs is an obstacle to GNN-based social recommendation as well. Here we propose a new graph embedding method, Heterogeneous Graph Propagation (HGP), to tackle these issues. HGP uses a group-user-item tripartite graph as input to reduce the number of edges and the complexity of paths in a social graph. To address oversmoothing, HGP embeds nodes with a personalized PageRank-based propagation scheme, applied separately to the group-user graph and the user-item graph. Node embeddings from each graph are integrated using an attention mechanism. We evaluate HGP on a large-scale real-world dataset consisting of 1,645,279 nodes and 4,711,208 edges. The experimental results show that HGP outperforms several baselines in terms of AUC and F1-score.
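
A hedged sketch of the personalized PageRank-based propagation the abstract refers to (in the style of APPNP), shown for a single toy graph; HGP applies such propagation separately to the group-user and user-item graphs and merges the results with attention. The toy adjacency, shapes, and hyperparameters are assumptions.

```python
# Personalized-PageRank-style propagation: the teleport term alpha * H keeps
# node embeddings anchored to their initial features, limiting oversmoothing.
import torch

def ppr_propagate(h, adj_norm, alpha=0.1, num_steps=10):
    """Iterate Z <- (1 - alpha) * A_norm @ Z + alpha * H."""
    z = h
    for _ in range(num_steps):
        z = (1 - alpha) * adj_norm @ z + alpha * h
    return z

num_nodes, dim = 6, 8
h = torch.randn(num_nodes, dim)                 # initial node embeddings
adj = (torch.rand(num_nodes, num_nodes) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()             # symmetrize the toy graph
deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
adj_norm = adj / deg                            # simple row normalization

print(ppr_propagate(h, adj_norm).shape)         # torch.Size([6, 8])
```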

* 6 pages, accepted for RecSys 2019 LBR Track 

Generating Images Part by Part with Composite Generative Adversarial Networks

Nov 14, 2016
Hanock Kwak, Byoung-Tak Zhang

Image generation remains a fundamental problem in artificial intelligence in general and deep learning in particular. The generative adversarial network (GAN) has been successful in generating high-quality samples of natural images. We propose a model called the composite generative adversarial network, which reveals the complex structure of images using multiple generators, each generating a part of the image. The parts are combined by an alpha blending process to create a single new image. For example, after training on a face dataset, the model can generate a background and a face sequentially with two generators. Training is done in an unsupervised way, without any labels indicating what each generator should generate. We empirically found that this generative model can learn such part-based structure.
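
An illustrative sketch (not the authors' architecture) of the alpha-blending composition step: each generator emits an RGB part plus an alpha mask, and the parts are blended sequentially onto a canvas. The tensor shapes and random stand-in outputs are assumptions.

```python
# Sequential alpha blending of generator outputs into one composite image.
import torch

def alpha_blend(parts):
    """parts: list of (rgb, alpha) pairs with rgb in [-1, 1], alpha in [0, 1]."""
    rgb, _ = parts[0]
    canvas = rgb                                  # first generator paints the background
    for rgb, alpha in parts[1:]:
        canvas = alpha * rgb + (1 - alpha) * canvas
    return canvas

background = (torch.rand(1, 3, 64, 64) * 2 - 1, torch.ones(1, 1, 64, 64))
face = (torch.rand(1, 3, 64, 64) * 2 - 1, torch.rand(1, 1, 64, 64))  # stand-in generator outputs
print(alpha_blend([background, face]).shape)      # torch.Size([1, 3, 64, 64])
```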

Ways of Conditioning Generative Adversarial Networks

Nov 04, 2016
Hanock Kwak, Byoung-Tak Zhang

GANs are generative models whose random samples realistically reflect natural images. They can also generate samples with specific attributes by concatenating a condition vector to the input, yet this setting is not well studied. We propose novel methods of conditioning generative adversarial networks (GANs) that achieve state-of-the-art results on MNIST and CIFAR-10. We introduce two main models: an information-retrieving model that extracts conditional information from the samples, and a spatial bilinear pooling model that forms bilinear features derived from the spatial cross product of an image and a condition vector. These methods significantly improve the log-likelihood of test data under the conditional distributions compared to simple concatenation.
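
A hedged sketch contrasting the concatenation baseline with spatial bilinear pooling: at every spatial location, the feature vector is combined with the condition vector via an outer product. Tensor shapes and names are illustrative assumptions, not the paper's exact model.

```python
# Conditioning by concatenation vs. spatial bilinear pooling.
import torch

def concat_condition(feat, cond):
    # feat: (B, C, H, W); cond: (B, D) broadcast over spatial positions.
    b, _, h, w = feat.shape
    cond_map = cond[:, :, None, None].expand(b, cond.shape[1], h, w)
    return torch.cat([feat, cond_map], dim=1)            # (B, C + D, H, W)

def spatial_bilinear_pool(feat, cond):
    # Outer product per location: (B, C, H, W) x (B, D) -> (B, C * D, H, W).
    bilinear = torch.einsum("bchw,bd->bcdhw", feat, cond)
    return bilinear.flatten(1, 2)

feat = torch.randn(2, 16, 8, 8)      # e.g., a feature map inside the network
cond = torch.randn(2, 10)            # e.g., a one-hot class condition
print(concat_condition(feat, cond).shape, spatial_bilinear_pool(feat, cond).shape)
```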
