Minju Jo

Hawkes Process Based on Controlled Differential Equations

May 18, 2023
Minju Jo, Seungji Kook, Noseong Park

Hawkes processes are a popular framework for modeling the occurrence of sequential events, i.e., occurrence dynamics, in fields such as social diffusion. In real-world scenarios, the inter-arrival times between events are irregular. However, existing neural network-based Hawkes process models not only i) fail to capture such complicated irregular dynamics, but also ii) resort to heuristics to calculate the log-likelihood of events, since they are mostly built on neural networks designed for regular, discrete inputs. To this end, we present the Hawkes process based on controlled differential equations (HP-CDE), which adopts neural controlled differential equations (neural CDEs), a continuous analogue of RNNs. Since HP-CDE reads data continuously, i) irregular time-series datasets can be treated properly, preserving their uneven temporal spacing, and ii) the log-likelihood can be computed exactly. Moreover, as both Hawkes processes and neural CDEs were originally developed to model complicated human behavioral dynamics, neural CDE-based Hawkes processes are well suited to modeling such occurrence dynamics. In our experiments with four real-world datasets, our method outperforms existing methods by non-trivial margins.
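As a point of reference for the exact-likelihood claim, the classical Hawkes process with an exponential excitation kernel admits a closed-form log-likelihood. The sketch below (plain NumPy with illustrative parameter values; not the HP-CDE model itself) computes it as the sum of log-intensities at event times minus the compensator integral.

```python
import numpy as np

def hawkes_loglik(times, mu, alpha, beta, T):
    """Exact log-likelihood of a 1-D Hawkes process with exponential kernel
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)) on [0, T].
    With this kernel the compensator integral has a closed form, so no
    numerical quadrature or heuristic approximation is needed."""
    times = np.asarray(times, dtype=float)
    log_intensity = 0.0
    for k, t in enumerate(times):
        past = times[:k]  # events strictly before t
        lam = mu + alpha * np.exp(-beta * (t - past)).sum()
        log_intensity += np.log(lam)
    # Closed-form compensator: int_0^T lambda(s) ds
    compensator = mu * T + (alpha / beta) * (1.0 - np.exp(-beta * (T - times))).sum()
    return log_intensity - compensator

events = [0.5, 1.2, 2.0]  # illustrative event times
ll = hawkes_loglik(events, mu=0.8, alpha=0.5, beta=1.5, T=3.0)
```

With `alpha = 0` the process reduces to a homogeneous Poisson process, so the result collapses to `n * log(mu) - mu * T`, which is a quick sanity check on the formula.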


Learnable Path in Neural Controlled Differential Equations

Jan 11, 2023
Sheo Yon Jhin, Minju Jo, Seungji Kook, Noseong Park, Sungpil Woo, Sunhwan Lim

Neural controlled differential equations (NCDEs), continuous analogues of recurrent neural networks (RNNs), are a specialized model for (irregular) time-series processing. Compared with similar models, e.g., neural ordinary differential equations (NODEs), the key distinctive characteristics of NCDEs are i) the adoption of a continuous path created by an interpolation algorithm from each raw discrete time-series sample and ii) the adoption of the Riemann--Stieltjes integral. It is this continuous path that makes NCDEs continuous analogues of RNNs. However, NCDEs rely on existing interpolation algorithms to create the path, and it is unclear whether those algorithms produce an optimal path. To this end, we present a method that generates another latent path (rather than relying on existing interpolation algorithms), which amounts to learning an appropriate interpolation method. We design an encoder-decoder module based on NCDEs and NODEs, together with a special training method for it. Our method shows the best performance in both time-series classification and forecasting.
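The baseline NCDE computation this work revisits can be sketched as follows: a discrete sample is interpolated into a continuous control path X, and the hidden state follows dz = f(z) dX, here approximated with a fixed-step Euler discretization of the Riemann--Stieltjes integral. The vector-field weights and observations below are random placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Irregularly sampled observations (t_i, x_i); linear interpolation
# defines the control path X(t) in the standard NCDE setup.
t_obs = np.array([0.0, 0.3, 1.1, 2.0])
x_obs = np.array([[0.0], [0.5], [-0.2], [0.4]])

hidden, channels = 2, 1
A = rng.normal(size=(hidden * channels, hidden)) * 0.5  # toy parameters

def f(z):
    # Toy CDE vector field f: R^hidden -> R^{hidden x channels}
    return np.tanh(A @ z).reshape(hidden, channels)

# Euler discretization of z(T) = z(0) + \int_0^T f(z(t)) dX(t):
# the integral is driven by increments dX of the interpolated path.
grid = np.linspace(t_obs[0], t_obs[-1], 201)
X = np.interp(grid, t_obs, x_obs[:, 0])[:, None]
z = np.zeros(hidden)
for k in range(len(grid) - 1):
    dX = X[k + 1] - X[k]
    z = z + f(z) @ dX
```

Note how the data enter only through the path increments `dX` — this is exactly the component the paper proposes to replace with a learned latent path.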

* Accepted by AAAI 2023 

TimeKit: A Time-series Forecasting-based Upgrade Kit for Collaborative Filtering

Nov 08, 2022
Seoyoung Hong, Minju Jo, Seungji Kook, Jaeeun Jung, Hyowon Wi, Noseong Park, Sung-Bae Cho

Recommender systems are a long-standing research problem in data mining and machine learning. They are incremental in nature, as new user-item interaction logs continually arrive. In real-world applications, a collaborative filtering algorithm must be periodically retrained to extract user/item embedding vectors, so a time series of embedding vectors arises naturally. We present a time-series forecasting-based upgrade kit (TimeKit), which works as follows: it i) chooses a base collaborative filtering algorithm, ii) incrementally extracts user/item embedding vectors with the base algorithm from user-item interaction logs, e.g., every month, iii) trains our time-series forecasting model — built on neural controlled differential equations (NCDEs), a recent breakthrough in processing complicated time-series data — on the extracted time series of embedding vectors, and iv) forecasts future embedding vectors and recommends items using their dot-product scores. Our experiments with four real-world benchmark datasets show that the proposed upgrade kit significantly enhances existing popular collaborative filtering algorithms.
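A minimal sketch of steps ii)-iv) under stand-in components: the NCDE forecaster is replaced here by naive linear extrapolation of the last two embedding snapshots, and the embeddings themselves are random placeholders. The point is the pipeline shape (snapshot series → forecast embeddings → dot-product recommendation), not the forecasting model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Time series of user/item embedding snapshots (e.g., one per month),
# as would be extracted by a base collaborative filtering algorithm.
months, n_users, n_items, dim = 5, 3, 4, 8
user_seq = rng.normal(size=(months, n_users, dim))
item_seq = rng.normal(size=(months, n_items, dim))

def naive_forecast(seq):
    # Stand-in for the NCDE forecaster: linear extrapolation from the
    # last two snapshots. TimeKit trains a neural forecaster instead.
    return seq[-1] + (seq[-1] - seq[-2])

next_users = naive_forecast(user_seq)   # (n_users, dim)
next_items = naive_forecast(item_seq)   # (n_items, dim)

# Recommend by dot-product scores between the forecast embeddings.
scores = next_users @ next_items.T      # (n_users, n_items)
top1 = scores.argmax(axis=1)            # best item index per user
```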

* Accepted at IEEE BigData 2022 

LORD: Lower-Dimensional Embedding of Log-Signature in Neural Rough Differential Equations

Apr 19, 2022
Jaehoon Lee, Jinsung Jeon, Sheo Yon Jhin, Jihyeon Hyeong, Jayoung Kim, Minju Jo, Seungji Kook, Noseong Park

Processing very long time-series data (e.g., length greater than 10,000) is a long-standing research problem in machine learning. Recently, a breakthrough called neural rough differential equations (NRDEs) was proposed and shown to be able to process such data. Its main idea is to use the log-signature transform, known to be more efficient than the Fourier transform for irregular long time series, to convert a very long time-series sample into a relatively short series of feature vectors. However, the log-signature transform incurs non-trivial spatial overhead. To this end, we present the LOweR-Dimensional embedding of log-signature (LORD), in which we define an NRDE-based autoencoder that implants higher-depth log-signature knowledge into the lower-depth log-signature. We show that the encoder successfully combines the higher-depth and lower-depth log-signature knowledge, which greatly stabilizes the training process and increases model accuracy. In our experiments with benchmark datasets, the improvement by our method is up to 75% across various classification and forecasting evaluation metrics.
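For intuition about the transform involved, here is a hedged sketch of a depth-2 log-signature (total increment plus Lévy areas) computed over windows, which is how a long series becomes a short series of feature vectors in the NRDE setting; production pipelines use dedicated signature libraries and possibly higher depths.

```python
import numpy as np

def logsig_depth2(path):
    """Depth-2 log-signature of a discrete d-dimensional path: the total
    increment (depth 1) plus the Levy areas (depth 2), i.e., the
    antisymmetric part of the second signature level."""
    inc = np.diff(path, axis=0)          # (n-1, d) step increments
    start = path[:-1] - path[0]          # position before each step
    # Levy area A_ij = 1/2 * sum_k (X_i dX_j - X_j dX_i)
    area = 0.5 * (start.T @ inc - inc.T @ start)
    d = path.shape[1]
    iu = np.triu_indices(d, k=1)         # independent entries of A
    return np.concatenate([path[-1] - path[0], area[iu]])

# Convert a long series into a short series of feature vectors by
# applying the transform per window (here: 1000 steps -> 10 vectors).
rng = np.random.default_rng(2)
series = rng.normal(size=(1000, 3)).cumsum(axis=0)
window = 100
features = np.stack([logsig_depth2(series[i:i + window + 1])
                     for i in range(0, len(series) - 1, window)])
```

For d = 3 channels each feature vector has 3 + 3 = 6 entries; the feature dimension grows rapidly with depth, which is the spatial overhead LORD targets.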

* main 9 pages 

EXIT: Extrapolation and Interpolation-based Neural Controlled Differential Equations for Time-series Classification and Forecasting

Apr 19, 2022
Sheo Yon Jhin, Jaehoon Lee, Minju Jo, Seungji Kook, Jinsung Jeon, Jihyeon Hyeong, Jayoung Kim, Noseong Park

Deep learning inspired by differential equations is a recent research trend and has achieved state-of-the-art performance on many machine learning tasks. Among such models, time-series modeling with neural controlled differential equations (NCDEs) is considered a breakthrough. In many cases, NCDE-based models not only provide better accuracy than recurrent neural networks (RNNs) but also make it possible to process irregular time series. In this work, we enhance NCDEs by redesigning their core part: generating a continuous path from a discrete time-series input. NCDEs typically use interpolation algorithms to convert discrete time-series samples into continuous paths. We instead propose to i) generate another latent continuous path using an encoder-decoder architecture, which replaces the interpolation step of NCDEs, i.e., neural network-based interpolation vs. explicit interpolation, and ii) exploit the generative nature of the decoder, i.e., extrapolation beyond the time domain of the original data when needed. Our NCDE design can therefore use both interpolated and extrapolated information for downstream machine learning tasks. In our experiments with 5 real-world datasets and 12 baselines, our extrapolation- and interpolation-based NCDEs outperform existing baselines by non-trivial margins.
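The interpolation-vs-extrapolation distinction can be illustrated without the paper's architecture: an explicit interpolant is only defined on the observed time domain, while a generative model of the path — here a least-squares cubic as a crude stand-in for the learned decoder — can also be evaluated beyond it.

```python
import numpy as np

# Observations on [0, 2]; a forecasting task needs values beyond t = 2.
t_obs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
x_obs = np.sin(t_obs)

# Explicit interpolation (the standard NCDE path) is only defined on
# the observed time domain; np.interp simply clamps outside it.
interp_at_3 = np.interp(3.0, t_obs, x_obs)   # clamped to x_obs[-1]

# A fitted model of the path (least-squares cubic, standing in for a
# generative decoder) interpolates AND extrapolates.
coeffs = np.polyfit(t_obs, x_obs, deg=3)
model_at_1 = np.polyval(coeffs, 1.0)         # interpolation
model_at_3 = np.polyval(coeffs, 3.0)         # extrapolation beyond t = 2
```

The clamped value at t = 3 carries no new information, whereas the fitted model produces a genuine (if uncertain) forecast — the property EXIT exploits with a learned decoder instead of a fixed polynomial.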

* main 8 pages 

LightMove: A Lightweight Next-POI Recommendation for Taxicab Rooftop Advertising

Aug 18, 2021
Jinsung Jeon, Soyoung Kang, Minju Jo, Seunghyeon Cho, Noseong Park, Seonghoon Kim, Chiyoung Song

Mobile digital billboards are an effective way to augment brand awareness. Among such mobile billboards, taxicab rooftop devices are emerging in the market as a brand-new medium. Motov is a leading company in the South Korean taxicab rooftop advertising market. In this work, we present a lightweight yet accurate deep learning-based method to predict taxicabs' next locations, enabling targeted advertising based on the demographic information of locations. Since next-POI recommendation datasets are frequently sparse, we design our model around neural ordinary differential equations (NODEs), which are known to be robust to sparse/incorrect input, with several enhancements. Our model, LightMove, achieves higher prediction accuracy with fewer parameters and/or shorter training/inference time than state-of-the-art models across various datasets.
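As background for the NODE backbone, a NODE evolves a hidden state by integrating a learned vector field over time. The sketch below uses a random toy field and a fixed-step Euler solver in place of a trained network and an adaptive ODE solver.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(4, 4)) * 0.3   # toy parameters of the vector field

def f(h, t):
    # Vector field of the NODE dh/dt = f(h, t); a real model would use
    # a trained neural network here.
    return np.tanh(W @ h)

def odeint_euler(f, h0, t0, t1, steps=100):
    # Fixed-step Euler solve of h(t1) = h(t0) + \int_{t0}^{t1} f(h, t) dt.
    h, dt = h0, (t1 - t0) / steps
    for k in range(steps):
        h = h + dt * f(h, t0 + k * dt)
    return h

h0 = rng.normal(size=4)             # embedded input (e.g., visit history)
h1 = odeint_euler(f, h0, 0.0, 1.0)  # evolved state, fed to a classifier
```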

* Accepted in CIKM 2021 

ACE-NODE: Attentive Co-Evolving Neural Ordinary Differential Equations

May 31, 2021
Sheo Yon Jhin, Minju Jo, Taeyong Kong, Jinsung Jeon, Noseong Park

Neural ordinary differential equations (NODEs) present a new paradigm for constructing (continuous-time) neural networks. While they show several good characteristics in terms of the number of parameters and flexibility in constructing neural networks, they also have two well-known limitations: i) theoretically, NODEs can learn only homeomorphic mapping functions, and ii) NODEs sometimes show numerical instability when solving integral problems. Many enhancements have been proposed to address these issues; to our knowledge, however, integrating attention into NODEs has been overlooked. To this end, we present a novel method, attentive dual co-evolving NODEs (ACE-NODE): one main NODE for a downstream machine learning task and a second NODE that provides attention to the main one. ACE-NODE supports both pairwise and elementwise attention. In our experiments, our method outperforms existing NODE-based and non-NODE-based baselines in almost all cases by non-trivial margins.
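A toy sketch of the dual co-evolving idea, with random placeholder weights and Euler integration: an attention state `a` is integrated jointly with the main state `h`, and a sigmoid of `a` gates the main vector field elementwise (an elementwise-attention sketch only; the paper also supports pairwise attention, and its actual dynamics are learned).

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3
Wh = rng.normal(size=(d, d)) * 0.2   # toy parameters, main NODE
Wa = rng.normal(size=(d, d)) * 0.2   # toy parameters, attention NODE

# Two co-evolving ODE states: h (downstream task) and a (attention).
# They are integrated jointly in one Euler loop, each driving the other.
h = rng.normal(size=d)
a = np.zeros(d)
dt, steps = 0.01, 100
for _ in range(steps):
    gate = 1.0 / (1.0 + np.exp(-a))  # sigmoid attention in (0, 1)
    dh = gate * np.tanh(Wh @ h)      # main dynamics, modulated by attention
    da = np.tanh(Wa @ h) - a         # attention dynamics, driven by h
    h, a = h + dt * dh, a + dt * da
```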

* Accepted by KDD 2021 