"Time Series Analysis": models, code, and papers

CCMI : Classifier based Conditional Mutual Information Estimation

Jun 05, 2019
Sudipto Mukherjee, Himanshu Asnani, Sreeram Kannan

Conditional Mutual Information (CMI) is a measure of conditional dependence between random variables X and Y, given another random variable Z. It can be used to quantify conditional dependence among variables in many data-driven inference problems such as graphical models, causal learning, feature selection and time-series analysis. While k-nearest neighbor (kNN) based estimators as well as kernel-based methods have been widely used for CMI estimation, they suffer severely from the curse of dimensionality. In this paper, we leverage advances in classifiers and generative models to design methods for CMI estimation. Specifically, we introduce an estimator for KL-Divergence based on the likelihood ratio by training a classifier to distinguish the observed joint distribution from the product distribution. We then show how to construct several CMI estimators using this basic divergence estimator by drawing ideas from conditional generative models. We demonstrate that the estimates from our proposed approaches do not degrade in performance with increasing dimension and obtain significant improvement over the widely used KSG estimator. Finally, as an application of accurate CMI estimation, we use our best estimator for conditional independence testing and achieve superior performance to the state-of-the-art tester on both simulated and real datasets.
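
The classifier two-sample likelihood-ratio idea at the core of this approach can be illustrated with a minimal sketch: train a classifier to separate samples from the joint distribution p(x, y) from samples of the product p(x)p(y) (obtained by permutation), then average the log likelihood ratio implied by the classifier's probabilities. This is not the authors' code; the MLP architecture, sample sizes, and variable names are illustrative assumptions, and only the basic divergence (MI) building block is shown, not the conditional extensions.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, dim = 5000, 5
x = rng.normal(size=(n, dim))
y = x + 0.5 * rng.normal(size=(n, dim))            # correlated Y, so I(X;Y) > 0

# Positive samples ~ joint p(x, y); negatives ~ product p(x)p(y) via permutation.
joint = np.hstack([x, y])
prod = np.hstack([x, y[rng.permutation(n)]])
data = np.vstack([joint, prod])
labels = np.r_[np.ones(n), np.zeros(n)]

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
clf.fit(data, labels)

# Likelihood ratio p_joint/p_product is approximated by g/(1 - g) for classifier output g.
g = np.clip(clf.predict_proba(joint)[:, 1], 1e-6, 1 - 1e-6)
mi_estimate = np.mean(np.log(g / (1.0 - g)))       # plug-in likelihood-ratio estimate of I(X;Y)
print(f"estimated I(X;Y) ~ {mi_estimate:.3f} nats")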

* Mutual Information and Conditional Mutual Information estimation; Conditional Independence Testing; Classifier two-sample likelihood ratio 

Mind Your POV: Convergence of Articles and Editors Towards Wikipedia's Neutrality Norm

Sep 18, 2018
Umashanthi Pavalanathan, Xiaochuang Han, Jacob Eisenstein

Wikipedia has a strong norm of writing in a 'neutral point of view' (NPOV). Articles that violate this norm are tagged, and editors are encouraged to make corrections. But the impact of this tagging system has not been quantitatively measured. Does NPOV tagging help articles to converge to the desired style? Do NPOV corrections encourage editors to adopt this style? We study these questions using a corpus of NPOV-tagged articles and a set of lexicons associated with biased language. An interrupted time series analysis shows that after an article is tagged for NPOV, there is a significant decrease in biased language in the article, as measured by several lexicons. However, for individual editors, NPOV corrections and talk page discussions yield no significant change in the usage of words in most of these lexicons, including Wikipedia's own list of 'words to watch.' This suggests that NPOV tagging and discussion do improve content but have less success in enculturating editors to the site's linguistic norms.
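
For readers unfamiliar with interrupted time series analysis, the segmented-regression form of the method can be sketched as follows. This is not the paper's pipeline; the synthetic weekly biased-word rate, the week window, and the variable names are assumptions made purely for illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
weeks = np.arange(-26, 26)                          # weeks relative to the NPOV tag
post = (weeks >= 0).astype(float)                   # 1 after the article is tagged
# Synthetic biased-word rate with a level drop after tagging.
bias_rate = 5.0 + 0.02 * weeks - 1.2 * post + rng.normal(0, 0.4, weeks.size)

X = sm.add_constant(np.column_stack([weeks, post, weeks * post]))
model = sm.OLS(bias_rate, X).fit()
print(model.params)        # baseline, pre-tag trend, level change at tag, trend change
print(model.pvalues[2])    # significance of the post-tag level change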

* Umashanthi Pavalanathan, Xiaochuang Han, and Jacob Eisenstein. 2018. Mind Your POV: Convergence of Articles and Editors Towards Wikipedia's Neutrality Norm. Proc. ACM Hum.-Comput. Interact. 2, CSCW, Article 137 (November 2018)  
* ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW), 2018 

Highly comparative fetal heart rate analysis

Dec 03, 2014
B. D. Fulcher, A. E. Georgieva, C. W. G. Redman, Nick S. Jones

A database of fetal heart rate (FHR) time series measured from 7221 patients during labor is analyzed with the aim of learning the types of features of these recordings that are informative of low cord pH. Our 'highly comparative' analysis involves extracting over 9000 time-series analysis features from each FHR time series, including measures of autocorrelation, entropy, distribution, and various model fits. This diverse collection of features was developed in previous work, and is publicly available. We describe five features that most accurately classify a balanced training set of 59 'low pH' and 59 'normal pH' FHR recordings. We then describe five of the features with the strongest linear correlation to cord pH across the full dataset of FHR time series. The features identified in this work may be used as part of a system for guiding intervention during labor in the future. This work successfully demonstrates the utility of comparing across a large, interdisciplinary literature on time-series analysis to automatically contribute new scientific results for specific biomedical signal processing challenges.
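
A toy version of the feature-based approach is sketched below: compute a handful of generic time-series features per recording and rank them by linear correlation with cord pH. The real study draws on the publicly available library of over 9000 features; the synthetic recordings and the four features used here are illustrative assumptions only.

import numpy as np
from scipy.stats import pearsonr, skew

rng = np.random.default_rng(2)
n_recordings, length = 100, 600
fhr = rng.normal(140, 10, size=(n_recordings, length))   # stand-in FHR traces (bpm)
cord_ph = rng.normal(7.2, 0.1, size=n_recordings)        # stand-in cord pH outcomes

def features(ts):
    ac1 = np.corrcoef(ts[:-1], ts[1:])[0, 1]             # lag-1 autocorrelation
    return {"mean": ts.mean(), "std": ts.std(), "skew": skew(ts), "ac1": ac1}

feats = [features(ts) for ts in fhr]
table = {name: np.array([f[name] for f in feats]) for name in feats[0]}
# Rank features by absolute linear correlation with cord pH.
for name in sorted(table, key=lambda k: -abs(pearsonr(table[k], cord_ph)[0])):
    r, p = pearsonr(table[name], cord_ph)
    print(f"{name:5s}  r = {r:+.2f}  p = {p:.2f}")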

* Fulcher, B. D., Georgieva, A., Redman, C. W., & Jones, N. S. (2012). Highly comparative fetal heart rate analysis (pp. 3135-3138). Presented at the 34th Annual International Conference of the IEEE EMBS, San Diego, CA, USA  
* 7 pages, 4 figures 

Change of human mobility during COVID-19: A United States case study

Sep 18, 2021
Justin Elarde, Joon-Seok Kim, Hamdi Kavak, Andreas Züfle, Taylor Anderson

With the onset of COVID-19 and the resulting shelter-in-place guidelines, combined with remote working practices, human mobility in 2020 was dramatically impacted. Existing studies typically examine whether mobility in specific localities increases or decreases at specific points in time and relate these changes to certain pandemic and policy events. In this paper, we study mobility change in the US through a five-step process using mobility footprint data. (Step 1) Propose the delta Time Spent in Public Places (Delta-TSPP) as a measure to quantify daily changes in mobility for each US county from 2019-2020. (Step 2) Conduct Principal Component Analysis (PCA) to reduce the Delta-TSPP time series of each county to lower-dimensional latent components of change in mobility. (Step 3) Conduct clustering analysis to find counties that exhibit similar latent components. (Step 4) Investigate local and global spatial autocorrelation for each component. (Step 5) Conduct correlation analysis to investigate how various population characteristics and behavior correlate with mobility patterns. Results show that by describing each county as a linear combination of the three latent components, we can explain 59% of the variation in mobility trends across all US counties. Specifically, change in mobility in 2020 for US counties can be explained as a combination of three latent components: 1) long-term reduction in mobility, 2) no change in mobility, and 3) short-term reduction in mobility. We observe significant correlations between the three latent components of mobility change and various population characteristics, including political leaning, population, COVID-19 cases and deaths, and unemployment. We find that our analysis provides a comprehensive understanding of mobility change in response to the COVID-19 pandemic.
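
Steps 2 and 3 of the pipeline (dimensionality reduction followed by clustering) can be sketched as below. The Delta-TSPP matrix here is synthetic, and while the abstract mentions three latent components, the cluster count and all other settings are assumptions made for illustration rather than the paper's configuration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_counties, n_days = 3000, 365
delta_tspp = rng.normal(size=(n_counties, n_days))   # synthetic daily change in time spent in public places

pca = PCA(n_components=3)                            # Step 2: three latent components per county
latent = pca.fit_transform(delta_tspp)
print("variance explained:", pca.explained_variance_ratio_.sum())

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(latent)   # Step 3: group similar counties
print("counties per cluster:", np.bincount(clusters))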

* Currently under review at PLOS One 

Discovering Relational Covariance Structures for Explaining Multiple Time Series

Jul 04, 2018
Anh Tong, Jaesik Choi

Analyzing time series data is important for predicting future events and changes in finance, manufacturing, and administrative decisions. In time series analysis, Gaussian Process (GP) regression methods have recently demonstrated competitive performance by decomposing temporal covariance structures. The covariance structure decomposition allows exploiting shared parameters over a set of multiple, selected time series. In this paper, we present two novel GP models which naturally handle multiple time series by placing an Indian Buffet Process (IBP) prior on the presence of shared kernels. We also investigate the well-definedness of the models when infinite latent components are introduced. We present a pragmatic search algorithm which explores a larger structure space more efficiently than the existing search algorithm. Experiments are conducted on both synthetic and real-world data sets, showing improved results in terms of structure discovery and predictive performance. We further present a promising application that generates comparison reports from our model results.
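
The underlying idea of decomposing a temporal covariance into interpretable kernel components can be sketched with an off-the-shelf GP, as below. This does not reproduce the paper's IBP prior over kernels shared across multiple series; the kernel choices and synthetic data are illustrative assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 200)[:, None]
y = np.sin(2 * np.pi * t.ravel()) + 0.1 * t.ravel() ** 2 + 0.2 * rng.normal(size=200)

# Covariance as a sum of interpretable components: smooth trend + periodicity + noise.
kernel = RBF(length_scale=3.0) + ExpSineSquared(length_scale=1.0, periodicity=1.0) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
print(gp.kernel_)          # learned hyperparameters of each additive component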

Making Good on LSTMs' Unfulfilled Promise

Nov 23, 2019
Daniel Philps, Artur d'Avila Garcez, Tillman Weyde

LSTMs promise much to financial time-series analysis, temporal and cross-sectional inference, but we find they do not deliver in a real-world financial management task. We examine an alternative called Continual Learning (CL), a memory-augmented approach which can provide transparent explanations, i.e., which memory did what and when. This work has implications for many financial applications including credit, time-varying fairness in decision making, and more. We make three important new observations. Firstly, as well as being more explainable, time-series CL approaches outperform LSTM and a simple sliding-window learner (a feed-forward neural net, FFNN). Secondly, we show that CL based on a sliding-window learner (FFNN) is more effective than CL based on a sequential learner (LSTM). Thirdly, we examine how real-world time-series noise impacts several similarity approaches used in CL memory addressing. We provide these insights using an approach called Continual Learning Augmentation (CLA) tested on a complex real-world problem: emerging-market equities investment decision making. CLA provides a test-bed as it can be based on different types of time-series learner, allowing LSTM and sliding-window (FFNN) learners to be tested side by side. CLA is also used to test several distance approaches used in a memory recall-gate: Euclidean distance (ED), dynamic time warping (DTW), auto-encoder (AE), and a novel hybrid approach, warp-AE. We find that CLA out-performs simple LSTM and FFNN learners and that CLA based on a sliding window (CLA-FFNN) out-performs an LSTM (CLA-LSTM) implementation. For memory addressing, ED under-performs DTW and AE, while warp-AE shows the best overall performance in a real-world financial task.
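
The memory recall-gate comparison can be illustrated with a small sketch that retrieves the stored memory closest to a query series under Euclidean distance versus DTW. This is not the CLA implementation; the memory bank, the query construction, and the plain DTW routine below are assumptions made for illustration.

import numpy as np

def dtw(a, b):
    # Plain O(len(a) * len(b)) dynamic time warping distance.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

rng = np.random.default_rng(5)
memory_keys = [rng.normal(size=60) for _ in range(8)]    # stored reference series
query = memory_keys[3] + 0.3 * rng.normal(size=60)       # noisy version of memory 3

best_ed = min(range(8), key=lambda k: np.linalg.norm(query - memory_keys[k]))
best_dtw = min(range(8), key=lambda k: dtw(query, memory_keys[k]))
print("recalled memory under Euclidean distance:", best_ed, "under DTW:", best_dtw)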

* 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. arXiv admin note: text overlap with arXiv:1812.02340 

Griffon: Reasoning about Job Anomalies with Unlabeled Data in Cloud-based Platforms

Aug 23, 2019
Liqun Shao, Yiwen Zhu, Abhiram Eswaran, Kristin Lieber, Janhavi Mahajan, Minsoo Thigpen, Sudhir Darbha, Siqi Liu, Subru Krishnan, Soundar Srinivasan, Carlo Curino, Konstantinos Karanasos

Microsoft's internal big data analytics platform comprises hundreds of thousands of machines, serving over half a million jobs daily from thousands of users. The majority of these jobs are recurring and are crucial for the company's operation. Although administrators spend significant effort tuning system performance, some jobs inevitably experience slowdowns, i.e., their execution time degrades over previous runs. Currently, the investigation of such slowdowns is a labor-intensive and error-prone process, which costs Microsoft significant human and machine resources, and negatively impacts several lines of business. In this work, we present Griffon, a system we built and deployed in production last year to automatically discover the root cause of job slowdowns. Existing solutions either rely on labeled data (i.e., resolved incidents with labeled reasons for job slowdowns), which is in most cases non-existent or non-trivial to acquire, or on time-series analysis of individual metrics that do not target specific jobs holistically. In contrast, in Griffon we cast the problem as a regression task that predicts the runtime of a job, and show how the relative contributions of the features used to train our interpretable model can be exploited to rank the potential causes of job slowdowns. Evaluated over historical incidents, we show that Griffon discovers slowdown causes that are consistent with the ones validated by domain-expert engineers, in a fraction of the time required by them.
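
The general idea of casting slowdown diagnosis as interpretable runtime regression can be sketched as follows: fit a model that predicts job runtime, then rank features by their contribution to one slow run's deviation from the job's typical runs. This is not Griffon's model; the feature names, the linear model, and the contribution scheme are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
feature_names = ["input_gb", "container_count", "queue_wait_s", "skew_ratio"]   # hypothetical features
X = rng.normal(size=(2000, 4))
runtime = 10 + X @ np.array([3.0, -2.0, 1.5, 4.0]) + rng.normal(0, 0.5, 2000)

model = Ridge().fit(X, runtime)                     # interpretable runtime-prediction model

baseline = X.mean(axis=0)                           # stand-in for the job's typical previous runs
slow_run = baseline + np.array([0.1, -0.2, 0.0, 2.5])    # one anomalous run
contrib = model.coef_ * (slow_run - baseline)       # per-feature contribution to the predicted slowdown
for name, c in sorted(zip(feature_names, contrib), key=lambda t: -abs(t[1])):
    print(f"{name:16s} {c:+.2f}")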

Online Graph Topology Learning from Matrix-valued Time Series

Jul 16, 2021
Yiye Jiang, Jérémie Bigot, Sofian Maabout

This paper is concerned with the statistical analysis of matrix-valued time series. These are data collected over a network of sensors (typically a set of spatial locations), recording, over time, observations of multiple measurements. From such data, we propose to learn, in an online fashion, a graph that captures two aspects of dependency: one describing the sparse spatial relationship between sensors, and the other characterizing the measurement relationship. To this end, we introduce a novel multivariate autoregressive model to infer the graph topology encoded in the coefficient matrix, which captures the sparse Granger-causality dependency structure present in such matrix-valued time series. We decompose the graph by imposing a Kronecker sum structure on the coefficient matrix. We develop two online approaches to learn the graph recursively. The first uses a Wald test on the projected OLS estimator, for which we derive the asymptotic distribution. For the second, we formalize a Lasso-type optimization problem and rely on homotopy algorithms to derive updating rules for estimating the coefficient matrix. Furthermore, we provide an adaptive tuning procedure for the regularization parameter. Numerical experiments on both synthetic and real data are performed to support the effectiveness of the proposed learning approaches.
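
A batch analogue of the Lasso-based approach can be sketched with a lag-1 vector autoregression whose sparsified coefficients define a Granger-causality graph. The paper's Kronecker-sum structure, online homotopy updates, and adaptive tuning are not reproduced here; the dimensions and penalty below are assumptions.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
p, T = 6, 500
A_true = np.diag([0.5] * p)
A_true[0, 3] = 0.4                                  # one true causal edge: series 3 -> series 0
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = X[t - 1] @ A_true.T + 0.1 * rng.normal(size=p)

past, present = X[:-1], X[1:]
A_hat = np.zeros((p, p))
for i in range(p):                                  # one Lasso regression per target series
    A_hat[i] = Lasso(alpha=0.01).fit(past, present[:, i]).coef_
graph = (np.abs(A_hat) > 1e-3).astype(int)          # adjacency: entry (i, j) = 1 if j Granger-causes i
print(graph)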

Nonlinear Time Series Modeling: A Unified Perspective, Algorithm, and Application

Dec 24, 2017
Subhadeep Mukhopadhyay, Emanuel Parzen

A new comprehensive approach to nonlinear time series analysis and modeling is developed in the present paper. We introduce novel, data-specific, mid-distribution-based Legendre Polynomial (LP)-like nonlinear transformations of the original time series Y(t) that enable us to adapt the existing stationary linear Gaussian time series modeling strategies and make them applicable to non-Gaussian and nonlinear processes in a robust fashion. The emphasis of the present paper is on empirical time series modeling via the LPTime algorithm. We demonstrate the effectiveness of our theoretical framework using daily S&P 500 return data from January 2, 1963 to December 31, 2009. Our proposed LPTime algorithm systematically discovers, all at once, the `stylized facts' of the financial time series that were previously noted by many researchers one at a time.
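
The mid-distribution rank transform that underlies LP-style modeling can be sketched as below: map the raw series through F_mid(y) = F(y) - 0.5 * p(y) and build a few orthogonal polynomial features of the result. This is only an illustration of the transform, not the LPTime algorithm; the heavy-tailed synthetic series and the use of standard Legendre polynomials are assumptions.

import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(8)
y = rng.standard_t(df=3, size=1000)                 # heavy-tailed stand-in for daily returns

# Empirical mid-distribution transform F_mid(y) = F(y) - 0.5 * p(y), which handles ties.
values, counts = np.unique(y, return_counts=True)
cdf = np.cumsum(counts) / y.size
pmf = counts / y.size
mid = dict(zip(values, cdf - 0.5 * pmf))
u = np.array([mid[v] for v in y])                   # transformed series in (0, 1)

# A few Legendre-polynomial features of the transform, rescaled to [-1, 1].
x = 2 * u - 1
T1 = legendre.legval(x, [0, 1])
T2 = legendre.legval(x, [0, 0, 1])
print("corr(T1, T2) =", round(np.corrcoef(T1, T2)[0, 1], 3))   # near-orthogonal score functions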

* Major restructuring has been done 