"magic": models, code, and papers

A Tutorial on Principal Component Analysis

Apr 03, 2014
Jonathon Shlens

Principal component analysis (PCA) is a mainstay of modern data analysis: a black box that is widely used but (sometimes) poorly understood. The goal of this paper is to dispel the magic behind this black box. The manuscript focuses on building a solid intuition for how and why principal component analysis works, and crystallizes this knowledge by deriving, from simple intuitions, the mathematics behind PCA. This tutorial does not shy away from explaining the ideas informally, nor does it shy away from the mathematics. The hope is that by addressing both aspects, readers of all levels will be able to gain a better understanding of PCA as well as the when, the how, and the why of applying this technique.
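As a quick illustration of the technique the tutorial derives (a minimal NumPy sketch; the function and variable names are mine, not the paper's), PCA can be computed from the eigendecomposition of the data's covariance matrix:

```python
import numpy as np

def pca(X, k):
    """Project n samples (rows of X) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(Xc, rowvar=False)          # covariance matrix of the features
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: for symmetric matrices
    order = np.argsort(eigvals)[::-1]       # sort by descending variance
    components = eigvecs[:, order[:k]]
    return Xc @ components, eigvals[order]

X = np.random.randn(200, 5)
scores, variances = pca(X, k=2)
```

The tutorial also relates this eigendecomposition view to the singular value decomposition, which is the numerically preferred route in practice.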


Exploring Classic Quantitative Strategies

Feb 23, 2022
Jun Lu

The goal of this paper is to debunk and dispel the magic behind black-box quantitative strategies. It aims to build a solid foundation for how and why the techniques work, and crystallizes this knowledge by deriving, from simple intuitions, the mathematics behind the strategies. This tutorial does not shy away from addressing either the formal or the informal aspects of quantitative strategies; the hope is that, by covering both, readers will gain a deeper understanding of these techniques as well as the when, the how, and the why of applying them. The strategies are presented on both S&P 500 and SH510300 data sets. However, the test results are merely examples of how the methods work; no claim is made suggesting real market positions.
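The abstract does not list the strategies here, but a moving-average crossover is a canonical example of a classic quantitative strategy; a minimal sketch on synthetic prices (all names and parameters are illustrative, with no claim about the paper's exact setup):

```python
import numpy as np

def ma_crossover_signals(prices, fast=20, slow=50):
    """Long when the fast moving average is above the slow one, else flat."""
    def sma(x, w):
        return np.convolve(x, np.ones(w) / w, mode="valid")
    f = sma(prices, fast)[slow - fast:]  # align the two average series
    s = sma(prices, slow)
    return (f > s).astype(int)           # 1 = hold the asset, 0 = stay in cash

prices = np.cumsum(np.random.randn(500)) + 100  # synthetic price path
positions = ma_crossover_signals(prices)
```

A real evaluation would run on the S&P 500 or SH510300 series and account for transaction costs.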


Tractable Query Answering and Optimization for Extensions of Weakly-Sticky Datalog±

Apr 13, 2015
Mostafa Milani, Leopoldo Bertossi

We consider a semantic class, weakly-chase-sticky (WChS), and a syntactic subclass, jointly-weakly-sticky (JWS), of Datalog± programs. Both extend the class of weakly-sticky (WS) programs, which appears in our applications to data quality. For WChS programs we propose a practical, polynomial-time query answering algorithm (QAA). We establish that the two classes are closed under magic-sets rewritings; as a consequence, QAA can be applied to the optimized programs. QAA takes as inputs the program (including the query) and semantic information about the "finiteness" of predicate positions. For the syntactic subclasses JWS and WS of WChS, this additional information is computable.

* To appear in Proc. Alberto Mendelzon WS on Foundations of Data Management (AMW15) 
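For readers unfamiliar with magic-sets rewritings, the idea (illustrated here on the classic ancestor program, which is my example rather than the paper's) is to restrict bottom-up evaluation to the facts relevant to a query's bound arguments; a small Python sketch of the effect:

```python
# Query: ancestor(ann, X)? Plain bottom-up evaluation derives ancestor
# facts for *every* person; a magic-sets rewriting seeds evaluation with
# magic_ancestor(ann) so only facts relevant to the query are derived.
parent = {("ann", "bob"), ("bob", "carl"), ("dan", "eve")}

def ancestors_magic(query_const, parent):
    magic = {query_const}          # magic predicate: bindings of interest
    ancestor = set()
    changed = True
    while changed:                 # naive fixpoint iteration
        changed = False
        for (x, y) in parent:
            if x in magic:
                if (x, y) not in ancestor:
                    ancestor.add((x, y)); changed = True
                if y not in magic:          # pass the binding downward
                    magic.add(y); changed = True
        for (x, y) in set(ancestor):        # transitive rule, magic-filtered
            for (y2, z) in parent:
                if y2 == y and (x, z) not in ancestor:
                    ancestor.add((x, z)); changed = True
    return ancestor

print(ancestors_magic("ann", parent))  # the dan/eve facts are never touched
```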

Comparative Study of View Update Algorithms in Rational Choice Theory

Nov 10, 2014
Radhakrishnan Delhibabu

The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. We show that knowledge base dynamics has an interesting connection with kernel change via hitting sets and abduction. The approach extends and integrates standard techniques for efficient query answering and integrity checking. Hitting sets are generated through a hyper-tableaux calculus and magic sets, with a focus on minimality. Many different view update algorithms have been proposed in the literature to address this problem. The present paper provides a comparative study of view update algorithms within the rational choice approach.

* http://link.springer.com/article/10.1007/s10489-014-0580-7. arXiv admin note: substantial text overlap with arXiv:1407.3512, arXiv:1301.5154 
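As background for the kernel-change connection (a brute-force sketch of the underlying notion, not the paper's hyper-tableaux procedure), a minimal hitting set intersects every set in a collection and contains no smaller hitting set:

```python
from itertools import combinations

def minimal_hitting_sets(collection):
    """All inclusion-minimal sets that intersect every set in the collection."""
    universe = sorted(set().union(*collection))
    hits = lambda h: all(set(h) & s for s in collection)
    candidates = [set(h) for r in range(len(universe) + 1)
                  for h in combinations(universe, r) if hits(h)]
    return [h for h in candidates
            if not any(other < h for other in candidates)]

print(minimal_hitting_sets([{1, 2}, {2, 3}, {1, 3}]))
# [{1, 2}, {1, 3}, {2, 3}]
```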

Semantic Change and Emerging Tropes In a Large Corpus of New High German Poetry

Sep 26, 2019
Thomas Haider, Steffen Eger

Due to its semantic succinctness and novelty of expression, poetry is a great test bed for semantic change analysis. However, so far there is a scarcity of large diachronic corpora. Here, we provide a large corpus of German poetry consisting of about 75k poems with more than 11 million tokens, ranging from the 16th to the early 20th century. We then track semantic change in this corpus by investigating the rise of tropes ('love is magic') over time and detecting change points of meaning, which we find to occur particularly within the German Romantic period. Additionally, through self-similarity, we reconstruct literary periods and find evidence that the law of linear semantic change also applies to poetry.

* Proceedings of the 1st International Workshop on Computational Approaches to Historical Language Change (pp. 216-222). At ACL 2019, Florence. https://www.aclweb.org/anthology/W19-4727 
* Historical Language Change Workshop at ACL 2019, Florence 
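A common way to operationalize such change-point detection (a sketch of the general approach, assuming per-period embeddings aligned into a shared space, and not necessarily the authors' exact pipeline) is to compare a word's vectors across adjacent time slices:

```python
import numpy as np

def self_similarity(embeddings_by_period, word):
    """Cosine similarity of a word's vector in adjacent time slices.

    embeddings_by_period: list of dicts mapping word -> vector, one per
    period, assumed already aligned into a common space (e.g. via
    orthogonal Procrustes). A low value suggests a change point.
    """
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    vecs = [e[word] for e in embeddings_by_period]
    return [cos(v1, v2) for v1, v2 in zip(vecs, vecs[1:])]

# toy example: three 4-dimensional "periods"
periods = [{"zauber": np.random.randn(4)} for _ in range(3)]
print(self_similarity(periods, "zauber"))
```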

A Comparison of Contextual and Non-Contextual Preference Ranking for Set Addition Problems

Jul 09, 2021
Timo Bertram, Johannes Fürnkranz, Martin Müller

In this paper, we study the problem of evaluating the addition of elements to a set. This problem is difficult because, in the general case, it cannot be reduced to unconditional preferences between the choices. Therefore, we model preferences based on the context of the decision. We discuss and compare two different Siamese network architectures for this task: a twin network that compares the two sets resulting after the addition, and a triplet network that models the contribution of each candidate to the existing set. We evaluate the two settings on a real-world task: learning human card preferences for deck building in the collectible card game Magic: The Gathering. We show that the triplet approach achieves a better result than the twin network, and that both outperform previous results on this task.

* SubSetML: Subset Selection in Machine Learning: From Theory to Practice @ ICML 2021 
* arXiv admin note: substantial text overlap with arXiv:2105.11864 
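The twin/triplet distinction can be sketched as follows (a toy PyTorch illustration of the two scoring schemes; the encoder, dimensions, and heads are placeholders, not the paper's architecture):

```python
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Embed a set of card vectors by mean-pooling a shared MLP (illustrative)."""
    def __init__(self, card_dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(card_dim, hidden), nn.ReLU())
    def forward(self, cards):                # cards: (n_cards, card_dim)
        return self.mlp(cards).mean(dim=0)

# Twin: score a candidate by encoding the *resulting* set.
def twin_score(encoder, head, deck, candidate):
    new_deck = torch.cat([deck, candidate.unsqueeze(0)])
    return head(encoder(new_deck))

# Triplet: score the candidate's contribution *relative to* the existing set.
def triplet_score(encoder, head, deck, candidate):
    context = encoder(deck)
    return head(torch.cat([context, candidate]))

card_dim = 16
deck = torch.randn(30, card_dim)             # current deck
candidate = torch.randn(card_dim)            # card being considered
encoder = SetEncoder(card_dim)
twin_head = nn.Linear(64, 1)
triplet_head = nn.Linear(64 + card_dim, 1)
print(twin_score(encoder, twin_head, deck, candidate),
      triplet_score(encoder, triplet_head, deck, candidate))
```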

Constraint solvers: An empirical evaluation of design decisions

Jan 31, 2010
Lars Kotthoff

This paper presents an evaluation of the design decisions made in four state-of-the-art constraint solvers: Choco, ECLiPSe, Gecode, and Minion. To assess the impact of design decisions, instances of the five problem classes n-Queens, Golomb Ruler, Magic Square, Social Golfers, and Balanced Incomplete Block Design are modelled and solved with each solver. The results of the experiments are not meant to give an indication of the performance of a solver, but rather to investigate what influence the choice of algorithms and data structures has. The analysis of the impact of the design decisions focuses on the different approaches to memory management, behaviour with increasing problem size, and specialised algorithms for specific types of variables. It also briefly considers other, less significant decisions.
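To make the Magic Square problem class concrete (a brute-force sketch of the constraints only; the solvers compared in the paper use constraint propagation and systematic search instead):

```python
from itertools import permutations

def magic_squares_3x3():
    """All 3x3 magic squares over 1..9 (rows, columns, diagonals sum to 15)."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for p in permutations(range(1, 10)):
        if all(p[a] + p[b] + p[c] == 15 for a, b, c in lines):
            yield p

print(next(magic_squares_3x3()))
```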


Deep RL Agent for a Real-Time Action Strategy Game

Feb 15, 2020
Michal Warchalski, Dimitrije Radojevic, Milos Milosevic

We introduce a reinforcement learning environment based on Heroic - Magic Duel, a 1v1 action strategy game. This domain is non-trivial for several reasons: it is a real-time game, the state space is large, the information given to the player before and at each step of a match is imperfect, and the distribution of actions is dynamic. Our main contribution is a deep reinforcement learning agent that plays the game at a competitive level, trained using PPO and self-play with multiple competing agents, employing only a simple reward of $\pm 1$ depending on the outcome of a single match. Our best self-play agent obtains around a $65\%$ win rate against the existing AI and over a $50\%$ win rate against a top human player.
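The reward scheme can be sketched as follows (a skeleton of terminal ±1 self-play rewards; the opponent-pool scheme and all names are illustrative assumptions, and the PPO update itself is omitted):

```python
import random

def match_rewards(winner):
    """Terminal reward of +/-1 for a single match, zero-sum between players."""
    return (1.0, -1.0) if winner == "a" else (-1.0, 1.0)

# self-play loop skeleton: the learner plays snapshots of itself,
# and frozen copies are added to the opponent pool over time
opponent_pool = ["snapshot_0"]
for step in range(4):
    opponent = random.choice(opponent_pool)
    winner = random.choice(["a", "b"])     # stand-in for playing a real match
    r_learner, r_opponent = match_rewards(winner)
    # ...the match trajectory and r_learner would feed a PPO update here...
    if step % 2 == 1:                      # periodically freeze a copy
        opponent_pool.append(f"snapshot_{step}")
```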


Latent Predictor Networks for Code Generation

Jun 08, 2016
Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Andrew Senior, Fumin Wang, Phil Blunsom

Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example, characters or tokens, to be marginalised, thus permitting scalable and effective training. Using this framework, we address the problem of generating programming code from a mixed natural language and structured specification. We create two new data sets for this paradigm derived from the collectible trading card games Magic: The Gathering and Hearthstone. On these, and a third preexisting corpus, we demonstrate that marginalising multiple predictors allows our model to outperform strong benchmarks.
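The marginalisation the abstract refers to has the general form p(y|x) = Σ_z p(z|x) p(y|x,z), where the latent choice z selects which predictor generates the next output; a tiny numeric sketch (the distributions below are made up for illustration):

```python
import numpy as np

# Two "predictors" propose distributions over the next token; a latent
# choice z selects between them and is marginalised out:
#   p(y | x) = sum_z p(z | x) * p(y | x, z)
p_z = np.array([0.7, 0.3])                 # p(z|x): e.g. generate vs copy
p_y_given_z = np.array([[0.5, 0.3, 0.2],   # p(y|x, z=0)
                        [0.1, 0.1, 0.8]])  # p(y|x, z=1)
p_y = p_z @ p_y_given_z                    # marginal next-token distribution
print(p_y)                                 # [0.38 0.24 0.38], sums to 1
```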


Grounding Bound Founded Answer Set Programs

May 14, 2014
Rehan Abdul Aziz, Geoffrey Chu, Peter James Stuckey

Bound Founded Answer Set Programming (BFASP) is an extension of Answer Set Programming (ASP) that extends stable model semantics to numeric variables. While the theory of BFASP is defined on ground rules, in practice BFASP programs are written as complex non-ground expressions. Flattening of BFASP is a technique used to simplify arbitrary expressions of the language to a small, well-defined set of primitive expressions. In this paper, we first show how to flatten arbitrary BFASP rule expressions to give equivalent BFASP programs. Next, we extend the bottom-up grounding technique and magic set transformation used by ASP to BFASP programs. Our implementation shows that, for BFASP problems, these techniques can significantly reduce the ground program size and improve subsequent solving.

* To appear in Theory and Practice of Logic Programming (TPLP)
