
Models, code, and papers for "magic"

Magic: the Gathering is as Hard as Arithmetic

Mar 11, 2020
Stella Biderman

Magic: the Gathering is a popular and famously complicated card game about magical combat. Recently, several authors including Chatterjee and Ibsen-Jensen (2016) and Churchill, Biderman, and Herrick (2019) have investigated the computational complexity of playing Magic optimally. In this paper we show that the "mate-in-$n$" problem for Magic is $\Delta^0_n$-hard and that optimal play in two-player Magic is non-arithmetic in general. These results apply to how real Magic is played, can be achieved using standard-size, tournament-legal decks, and do not rely on stochasticity or hidden information. Our paper builds upon the construction that Churchill, Biderman, and Herrick (2019) used to show that this problem was at least as hard as the halting problem.
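For readers less familiar with the notation, the standard arithmetic-hierarchy relations below (textbook background, not part of the paper's construction) situate these results: $\Sigma^0_1$-hardness is hardness for the halting problem, and "non-arithmetic" means outside every level of the hierarchy.

```latex
% Standard definitions of the arithmetic hierarchy (background only):
\Delta^0_n = \Sigma^0_n \cap \Pi^0_n, \qquad
\Sigma^0_n \cup \Pi^0_n \subseteq \Delta^0_{n+1}, \qquad
\text{non-arithmetic} \;\Longleftrightarrow\; \text{not in } \bigcup_{n \ge 1} \Sigma^0_n .
```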

* pre-print, currently under review 

Playing magic tricks to deep neural networks untangles human deception

Aug 20, 2019
Regina Zaghi-Lara, Miguel Ángel Gea, Jordi Camí, Luis M. Martínez, Alex Gomez-Marin

Magic is the art of producing in the spectator an illusion of impossibility. Although the scientific study of magic is in its infancy, the advent of recent tracking algorithms based on deep learning now allows us to quantify the skills of the magician in naturalistic conditions at unprecedented resolution and robustness. In this study, we deconstructed stage magic into purely motor maneuvers and trained an artificial neural network (DeepLabCut) to follow coins as a professional magician made them appear and disappear in a series of tricks. Rather than using AI as a mere tracking tool, we conceived it as an "artificial spectator". When the coins were not visible, the algorithm was trained to infer their location as a human spectator would (i.e. in the left fist). This created situations where the human was fooled while the AI (as seen by a human) was not, and vice versa. Magic from the perspective of the machine reveals our own cognitive biases.


Formal Methods with a Touch of Magic

May 25, 2020
Parand Alizadeh Alamdari, Guy Avni, Thomas A. Henzinger, Anna Lukina

Machine learning and formal methods have complementary benefits and drawbacks. In this work, we address the controller-design problem with a combination of techniques from both fields. The use of black-box neural networks in deep reinforcement learning (deep RL) poses a challenge for such a combination. Instead of reasoning formally about the output of deep RL, which we call the {\em wizard}, we extract from it a decision-tree based model, which we refer to as the {\em magic book}. Using the extracted model as an intermediary, we are able to handle problems that are infeasible for either deep RL or formal methods by themselves. First, we suggest, for the first time, using a magic book in a synthesis procedure. We synthesize a stand-alone correct-by-design controller that enjoys the favorable performance of RL. Second, we incorporate a magic book in a bounded model checking (BMC) procedure. BMC allows us to find numerous traces of the plant under the control of the wizard, which a user can use to increase the trustworthiness of the wizard and direct further training.
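To make the wizard/magic-book pairing concrete, here is a minimal sketch, not the paper's procedure: a trained policy (the hypothetical `wizard_policy` below, with placeholder data) is queried on visited states and a small decision tree is fit to imitate it, giving an inspectable surrogate that synthesis or BMC could reason about.

```python
# Sketch of policy distillation into a decision tree ("magic book").
# All names and data are hypothetical placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def wizard_policy(state):
    # Stand-in for the black-box deep RL policy.
    return int(state[0] + state[1] > 1.0)

rng = np.random.default_rng(0)
states = rng.random((5000, 4))                       # states seen during rollouts
actions = np.array([wizard_policy(s) for s in states])

# The "magic book": small, inspectable, and usable as an intermediary
# in a synthesis or bounded model checking procedure.
magic_book = DecisionTreeClassifier(max_depth=4).fit(states, actions)
print(magic_book.score(states, actions))             # imitation accuracy
```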


Enhancing magic sets with an application to ontological reasoning

Jul 19, 2019
Mario Alviano, Nicola Leone, Pierfrancesco Veltri, Jessica Zangari

Magic sets are a Datalog-to-Datalog rewriting technique to optimize query answering. The rewritten program focuses on a portion of the stable model(s) of the input program which is sufficient to answer the given query. However, the rewriting may introduce new recursive definitions, which can even involve negation and aggregations, and may slow down program evaluation. This paper enhances the magic set technique by preventing the creation of (new) recursive definitions in the rewritten program. It turns out that the new version of magic sets is closed for Datalog programs with stratified negation and aggregations, which is very convenient for obtaining efficient computation of the stable model of the rewritten program. Moreover, the rewritten program is further optimized by the elimination of subsumed rules and by the efficient handling of the cases where binding propagation is lost. The research was stimulated by a challenge on the exploitation of Datalog/\textsc{dlv} for efficient reasoning on large ontologies. All proposed techniques have hence been implemented in the \textsc{dlv} system and tested for ontological reasoning, confirming their effectiveness. Under consideration for publication in Theory and Practice of Logic Programming.
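As a toy illustration of the binding propagation that magic sets exploit (a simplification, not the enhanced rewriting proposed here), consider the left-linear ancestor program queried with a bound first argument; the sketch below restricts bottom-up evaluation to the tuples relevant to that binding.

```python
# Toy sketch of the effect of magic sets on the program
#   ancestor(X,Y) :- parent(X,Y).
#   ancestor(X,Z) :- ancestor(X,Y), parent(Y,Z).
# for the query ancestor(ann, X): only facts rooted at "ann" are derived,
# so the irrelevant dan/eve branch is never touched.

parent = {("ann", "bob"), ("bob", "carol"), ("dan", "eve")}

def ancestors_bound(query_const):
    # Seed with parent facts for the bound constant (the "magic" seed),
    # then iterate the recursive rule to a fixpoint.
    anc = {(x, y) for (x, y) in parent if x == query_const}
    changed = True
    while changed:
        new = {(x, z) for (x, y) in anc for (y2, z) in parent if y == y2}
        changed = not new <= anc
        anc |= new
    return anc

print(ancestors_bound("ann"))   # ancestor facts rooted at "ann" only
```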

* Paper presented at the 35th International Conference on Logic Programming (ICLP 2019), Las Cruces, New Mexico, USA, 20-25 September 2019, 16 pages 

Magic for Filter Optimization in Dynamic Bottom-up Processing

Apr 29, 1996
Guido Minnen

Off-line compilation of logic grammars using Magic allows the incorporation of filtering into the logic underlying the grammar. The explicit definite-clause characterization of filtering resulting from Magic compilation allows processor-independent and logically clean optimizations of dynamic bottom-up processing with respect to goal-directedness. Two filter optimizations based on the program transformation technique of Unfolding are discussed which are of practical and theoretical interest.

* Proceedings of ACL 96, Santa Cruz, USA, June 23-28 
* 8 pages LaTeX (uses aclap.sty) 

A Machine Learning Imaging Core using Separable FIR-IIR Filters

Jan 02, 2020
Masayoshi Asama, Leo F. Isikdogan, Sushma Rao, Bhavin V. Nayak, Gilad Michael

We propose fixed-function neural network hardware that is designed to perform pixel-to-pixel image transformations in a highly efficient way. We use a fully trainable, fixed-topology neural network to build a model that can perform a wide variety of image processing tasks. Our model uses compressed skip lines and hybrid FIR-IIR blocks to reduce the latency and hardware footprint. Our proposed Machine Learning Imaging Core, dubbed MagIC, uses a silicon area of ~3 mm^2 (in TSMC 16nm), which is orders of magnitude smaller than a comparable pixel-wise dense prediction model. MagIC requires no DDR bandwidth, no SRAM, and practically no external memory. Each MagIC core consumes 56 mW (215 mW max power) at 500 MHz and achieves an energy-efficient throughput of 23 TOPS/W/mm^2. MagIC can be used as a multi-purpose image processing block in an imaging pipeline, approximating compute-heavy image processing applications, such as image deblurring, denoising, and colorization, within the power and silicon area limits of mobile devices.
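As a software illustration of the separable FIR-IIR idea (hypothetical coefficients, not the MagIC hardware), the sketch below applies a short FIR kernel along rows and a first-order recursive (IIR) smoother along columns of an image.

```python
# Separable FIR-IIR filtering sketch with made-up coefficients.
import numpy as np
from scipy.signal import lfilter

image = np.random.rand(256, 256).astype(np.float32)

# FIR stage: short symmetric kernel applied along rows.
fir_taps = np.array([0.25, 0.5, 0.25], dtype=np.float32)
fir_out = lfilter(fir_taps, [1.0], image, axis=1)

# IIR stage: first-order recursive smoother applied along columns;
# the feedback term gives a long effective support at low cost.
alpha = 0.8
iir_out = lfilter([1.0 - alpha], [1.0, -alpha], fir_out, axis=0)

print(iir_out.shape)   # (256, 256): a pixel-to-pixel transformation
```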


MAGIC: Multi-scale Heterogeneity Analysis and Clustering for Brain Diseases

Jul 01, 2020
Junhao Wen, Erdem Varol, Ganesh Chand, Aristeidis Sotiras, Christos Davatzikos

There is a growing amount of clinical, anatomical and functional evidence for the heterogeneous presentation of neuropsychiatric and neurodegenerative diseases such as schizophrenia and Alzheimer's Disease (AD). Elucidating distinct subtypes of diseases allows a better understanding of neuropathogenesis and enables the possibility of developing targeted treatment programs. Recent semi-supervised clustering techniques have provided a data-driven way to understand disease heterogeneity. However, existing methods do not take into account that subtypes of the disease might present themselves at different spatial scales across the brain. Here, we introduce a novel method, MAGIC, to uncover disease heterogeneity by leveraging multi-scale clustering. We first extract multi-scale patterns of structural covariance (PSCs), followed by a semi-supervised clustering with double cyclic block-wise optimization across different scales of PSCs. We validate MAGIC using simulated heterogeneous neuroanatomical data and demonstrate its clinical potential by exploring the heterogeneity of AD using T1 MRI scans of 228 cognitively normal (CN) participants and 191 AD patients. Our results indicate two main subtypes of AD with distinct atrophy patterns that consist of both fine-scale atrophy in the hippocampus and large-scale atrophy in cortical regions. The evidence for heterogeneity is further corroborated by the clinical evaluation of the two subtypes, which indicates that there is a subpopulation of AD patients that tends to be younger and to decline faster in cognitive performance relative to the other subpopulation, which tends to be older and maintains a relatively steady decline in cognitive abilities.
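A heavily simplified sketch of the multi-scale idea follows (plain NMF plus k-means on synthetic data; this is not MAGIC's semi-supervised, double cyclic block-wise optimization): patterns are extracted at several scales and subjects are clustered on the concatenated loadings.

```python
# Toy multi-scale decomposition + clustering sketch (not the MAGIC method).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
volumes = rng.random((419, 100))      # hypothetical: 419 subjects x 100 ROIs

loadings = []
for k in (2, 4, 8):                   # coarse-to-fine scales
    model = NMF(n_components=k, init="nndsvda", max_iter=500)
    loadings.append(model.fit_transform(volumes))
features = np.hstack(loadings)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))            # sizes of the two candidate subtypes
```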

* 11 pages, 3 figures, accepted by MICCAI2020 

How Insight Emerges in a Distributed, Content-addressable Memory

Jun 18, 2011
Liane Gabora, Apara Ranjan

We begin this chapter with the bold claim that it provides a neuroscientific explanation of the magic of creativity. Creativity presents a formidable challenge for neuroscience. Neuroscience generally involves studying what happens in the brain when someone engages in a task that involves responding to a stimulus, or retrieving information from memory and using it the right way, or at the right time. If the relevant information is not already encoded in memory, the task generally requires that the individual make systematic use of information that is encoded in memory. But creativity is different. It paradoxically involves studying how someone pulls out of their brain something that was never put into it! Moreover, it must be something both new and useful, or appropriate to the task at hand. The ability to pull out of memory something new and appropriate that was never stored there in the first place is what we refer to as the magic of creativity. Even if we are so fortunate as to determine which areas of the brain are active and how these areas interact during creative thought, we will not have an answer to the question of how the brain comes up with solutions and artworks that are new and appropriate. On the other hand, since the representational capacity of neurons emerges at a level that is higher than that of the individual neurons themselves, the inner workings of neurons are too low a level to explain the magic of creativity. Thus we look to a level that is midway between gross brain regions and neurons. Since creativity generally involves combining concepts from different domains, or seeing old ideas from new perspectives, we focus our efforts on the neural mechanisms underlying the representation of concepts and ideas. Thus we ask questions about the brain at the level that accounts for its representational capacity, i.e. at the level of distributed aggregates of neurons.

* Gabora, L. & Ranjan, A. (2012). How insight emerges in a distributed, content-addressable memory. In A. Bristol, O. Vartanian, & J. Kaufman (Eds.) The Neuroscience of Creativity. New York: Oxford University Press 
* in press; 17 pages; 2 figures 

A third level trigger programmable on FPGA for the gamma/hadron separation in a Cherenkov telescope using pseudo-Zernike moments and the SVM classifier

Feb 24, 2006
Marco Frailis, Oriana Mansutti, Praveen Boinee, Giuseppe Cabras, Alessandro De Angelis, Barbara De Lotto, Alberto Forti, Mauro Dell'Orso, Riccardo Paoletti, Angelo Scribano, Nicola Turini, Mose' Mariotti, Luigi Peruzzo, Antonio Saggion

We studied the application of the Pseudo-Zernike features as image parameters (instead of the Hillas parameters) for the discrimination between the images produced by atmospheric electromagnetic showers caused by gamma-rays and those caused by hadrons in the MAGIC Experiment. We used a Support Vector Machine as the classification algorithm, with the computed Pseudo-Zernike features as classification parameters. We implemented a kernel function of the SVM and the Pseudo-Zernike features on an FPGA board to build a third-level trigger for the gamma-hadron separation task of the MAGIC Experiment.
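For orientation, a minimal sketch of the classification stage alone is shown below, with random stand-in features in place of the pseudo-Zernike moments; the feature extraction and its FPGA implementation are outside the sketch.

```python
# SVM gamma/hadron separation sketch on placeholder features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
features = rng.random((1000, 25))        # stand-ins for pseudo-Zernike features
labels = rng.integers(0, 2, size=1000)   # 1 = gamma, 0 = hadron

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print(clf.score(X_te, y_te))             # near chance on random stand-in data
```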


Goal-Driven Query Answering for Existential Rules with Equality

Nov 20, 2017
Michael Benedikt, Boris Motik, Efthymia Tsamoura

Inspired by the magic sets for Datalog, we present a novel goal-driven approach for answering queries over terminating existential rules with equality (aka TGDs and EGDs). Our technique improves the performance of query answering by pruning the consequences that are not relevant for the query. This is challenging in our setting because equalities can potentially affect all predicates in a dataset. We address this problem by combining the existing singularization technique with two new ingredients: an algorithm for identifying the rules relevant to a query and a new magic sets algorithm. We show empirically that our technique can significantly improve the performance of query answering, and that it can mean the difference between answering a query in a few seconds and not being able to process the query at all.


Ontology of Card Sleights

Mar 20, 2019
Aaron Sterling

We present a machine-readable movement writing for sleight-of-hand moves with cards -- a "Labanotation of card magic." This scheme of movement writing contains 440 categories of motion, and appears to taxonomize all card sleights that have appeared in over 1500 publications. The movement writing is axiomatized in $\mathcal{SROIQ}$(D) Description Logic, and collected formally as an Ontology of Card Sleights, a computational ontology that extends the Basic Formal Ontology and the Information Artifact Ontology. The Ontology of Card Sleights is implemented in OWL DL, a Description Logic fragment of the Web Ontology Language. While ontologies have historically been used to classify at a less granular level, the algorithmic nature of card tricks allows us to transcribe a performer's actions step by step. We conclude by discussing design criteria we have used to ensure the ontology can be accessed and modified with a simple click-and-drag interface. This may allow database searches and performance transcriptions by users with card magic knowledge, but no ontology background.
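As a small illustration of defining such classes programmatically (a hypothetical fragment, not the actual Ontology of Card Sleights), an OWL class hierarchy can be declared with owlready2 and saved as an OWL file:

```python
# Hypothetical mini-hierarchy of card-sleight classes in OWL via owlready2.
from owlready2 import get_ontology, Thing

onto = get_ontology("http://example.org/card_sleights.owl")

with onto:
    class Sleight(Thing): pass          # top-level motion category
    class Shuffle(Sleight): pass        # a family of sleights
    class FalseShuffle(Shuffle): pass   # a specific taxonomic leaf

onto.save(file="card_sleights.owl")      # serialise the toy ontology
```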

* IEEE 14th International Conference on Semantic Computing (ICSC), February 2019, pp. 263-270 
* 8 pages. Preprint. Final version appeared in ICSC 2019. Copyright of final version is held by IEEE 

Mechanisms of Artistic Creativity in Deep Learning Neural Networks

Jun 30, 2019
Lonce Wyse

The generative capabilities of deep learning neural networks (DNNs) have been attracting increasing attention, both for the remarkable artifacts they produce and because of the vast conceptual difference between how they are programmed and what they do. DNNs are 'black boxes' where high-level behavior is not explicitly programmed, but emerges from the complex interactions of thousands or millions of simple computational elements. Their behavior is often described in anthropomorphic terms that can be misleading, seem magical, or stoke fears of an imminent singularity in which machines become 'more' than human. In this paper, we examine 5 distinct behavioral characteristics associated with creativity, and provide an example of a mechanism from generative deep learning architectures that gives rise to each of these characteristics. All 5 emerge from machinery built for purposes other than the creative characteristics they exhibit, mostly classification. These mechanisms of creative generative capabilities thus demonstrate a deep kinship to computational perceptual processes. By understanding how these different behaviors arise, we hope, on the one hand, to take the magic out of anthropomorphic descriptions and, on the other, to build a deeper appreciation of machinic forms of creativity on their own terms that will allow us to nurture their further development.

* 8 pages, International Conference on Computational Creativity, Charlotte, NC, USA. June, 2019 

Extending Weakly-Sticky Datalog+/-: Query-Answering Tractability and Optimizations

Jul 10, 2016
Mostafa Milani, Leopoldo Bertossi

Weakly-sticky (WS) Datalog+/- is an expressive member of the family of Datalog+/- programs that is based on the syntactic notions of stickiness and weak-acyclicity. Query answering over WS programs has been investigated, but there is still much work to do on the design and implementation of practical query answering (QA) algorithms and their optimizations. Here, we study sticky and WS programs from the point of view of the behavior of the chase procedure, extending the stickiness property of the chase to that of generalized stickiness of the chase (gsch-property). With this property we specify the semantic class of GSCh programs, which includes sticky and WS programs, and other syntactic subclasses that we identify. In particular, we introduce joint-weakly-sticky (JWS) programs, which include WS programs. We also propose a bottom-up QA algorithm for a range of subclasses of GSCh. The algorithm runs in polynomial time (in data) for JWS programs. Unlike the WS class, JWS is closed under a general magic-sets rewriting procedure for the optimization of programs with existential rules. We apply the magic-sets rewriting in combination with the proposed QA algorithm for the optimization of QA over JWS programs.

* Extended version of RR'16 paper 

Selective Magic HPSG Parsing

Jul 08, 1999
Guido Minnen

We propose a parser for constraint-logic grammars implementing HPSG that combines the advantages of dynamic bottom-up and advanced top-down control. The parser allows the user to apply magic compilation to specific constraints in a grammar, which as a result can be processed dynamically in a bottom-up and goal-directed fashion. State-of-the-art top-down processing techniques are used to deal with the remaining constraints. We discuss various aspects concerning the implementation of the parser as part of a grammar development system.

* Proceedings of EACL99, Bergen, Norway, June 8-11 
* 9 pages, LaTeX with 4 postscript figures (uses avm.sty, eaclap.sty and psfig-scale.sty) 

Magic Sets for Disjunctive Datalog Programs

Apr 27, 2012
Mario Alviano, Wolfgang Faber, Gianluigi Greco, Nicola Leone

In this paper, a new technique for the optimization of (partially) bound queries over disjunctive Datalog programs with stratified negation is presented. The technique exploits the propagation of query bindings and extends the Magic Set (MS) optimization technique. An important feature of disjunctive Datalog is nonmonotonicity, which calls for nondeterministic implementations, such as backtracking search. A distinguishing characteristic of the new method is that the optimization can be exploited also during the nondeterministic phase. In particular, after some assumptions have been made during the computation, parts of the program may become irrelevant to a query under these assumptions. This allows for dynamic pruning of the search space. In contrast, the effect of the previously defined MS methods for disjunctive Datalog is limited to the deterministic portion of the process. In this way, the potential performance gain by using the proposed method can be exponential, as observed empirically. The correctness of MS is established thanks to a strong relationship between MS and unfounded sets that has not been studied in the literature before. This knowledge allows the method to be extended in a natural way to programs with stratified negation. The proposed method has been implemented in DLV and various experiments have been conducted. Experimental results on synthetic data confirm the utility of MS for disjunctive Datalog, and they highlight the computational gain that may be obtained by the new method w.r.t. the previously proposed MS methods for disjunctive Datalog programs. Further experiments on real-world data show the benefits of MS within an application scenario that has received considerable attention in recent years, the problem of answering user queries over possibly inconsistent databases originating from the integration of autonomous sources of information.

* 67 pages, 19 figures, preprint submitted to Artificial Intelligence 

Neural Networks Models for Analyzing Magic: the Gathering Cards

Oct 08, 2018
Felipe Zilio, Marcelo Prates, Luis Lamb

Historically, games of all kinds have often been the subject of study in scientific works of Computer Science, including the field of machine learning. By using machine learning techniques and applying them to a game with defined rules or a structured dataset, it is possible to learn and improve on the already existing techniques and methods to tackle new challenges and solve problems that are out of the ordinary. The already existing work on card games tends to focus on gameplay and card mechanics. This work aims to apply neural network models, including Convolutional Neural Networks and Recurrent Neural Networks, in order to analyze Magic: the Gathering cards, both in terms of card text and illustrations; the card images and texts are used to train the networks in order to be able to classify them into multiple categories. The ultimate goal was to develop a methodology that could generate card text matching an input image, which was attained by relating the prediction values of the images and the generated text across the different categories.
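For context, a minimal image-branch sketch is given below (an arbitrary small CNN, not the architecture from the paper); the text branch, e.g. an RNN over card text, is omitted, and the five output classes are hypothetical.

```python
# Small CNN sketch for classifying card illustrations into a few categories.
import torch
import torch.nn as nn

class CardImageClassifier(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, num_classes),   # assumes 64x64 input crops
        )

    def forward(self, x):
        return self.head(self.features(x))

model = CardImageClassifier()
logits = model(torch.randn(8, 3, 64, 64))   # a batch of 8 card crops
print(logits.shape)                         # torch.Size([8, 5])
```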

* 10 pages, 1 figure, 9 tables. Accepted at ICONIP 2018 

Ridge Regularization: an Essential Concept in Data Science

May 30, 2020
Trevor Hastie

Ridge or more formally $\ell_2$ regularization shows up in many areas of statistics and machine learning. It is one of those essential devices that any good data scientist needs to master for their craft. In this brief ridge fest I have collected together some of the magic and beauty of ridge that my colleagues and I have encountered over the past 40 years in applied statistics.
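For reference, the closed form of the ridge estimate in the linear-model case is the familiar textbook expression below (background, not an excerpt from the paper).

```latex
\hat{\beta}_\lambda
  = \arg\min_{\beta} \; \|y - X\beta\|_2^2 + \lambda \|\beta\|_2^2
  = (X^\top X + \lambda I)^{-1} X^\top y, \qquad \lambda > 0 .
```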

* 17 pages, 5 figures. This paper was invited by Technometrics to appear in a special section to celebrate the 50th anniversary of the 1970 original ridge paper by Hoerl and Kennard 

Large-scale Ontological Reasoning via Datalog

Mar 21, 2020
Mario Alviano, Marco Manna

Reasoning over OWL 2 is a very expensive task in general, and therefore the W3C identified tractable profiles exhibiting good computational properties. Ontological reasoning for many fragments of OWL 2 can be reduced to the evaluation of Datalog queries. This paper surveys some of these compilations, and in particular the one addressing queries over Horn-$\mathcal{SHIQ}$ knowledge bases and its implementation in DLV2 enhanced by a new version of the Magic Sets algorithm.

* 15 pages, 2 tables, 1 figure, 2 algorithms, under review for the book Studies on the Semantic Web Series 

Self-Organising Networks for Classification: developing Applications to Science Analysis for Astroparticle Physics

Feb 09, 2004
A. De Angelis, P. Boinee, M. Frailis, E. Milotti

Physics analysis in astroparticle experiments requires the capability of recognizing new phenomena; in order to establish what is new, it is important to develop tools for automatic classification, able to compare the final result with data from different detectors. A typical example is the problem of Gamma Ray Burst detection, classification, and possible association to known sources: for this task, physicists will need, in the coming years, tools to associate data from optical databases, from satellite experiments (EGRET, GLAST), and from Cherenkov telescopes (MAGIC, HESS, CANGAROO, VERITAS).


Multidimensional data classification with artificial neural networks

Dec 06, 2004
P. Boinee, F. Barbarino, A. De Angelis

Multi-dimensional data classification is an important and challenging problem in many astro-particle experiments. Neural networks have proved to be versatile and robust in multi-dimensional data classification. In this article we study the classification of gammas from hadrons for the MAGIC Experiment. Two neural networks have been used for the classification task. One is a Multi-Layer Perceptron based on supervised learning, and the other is a Self-Organising Map (SOM), which is based on an unsupervised learning technique. The results are presented, and possible ways of combining these networks are proposed to yield better and faster classification results.
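A minimal sketch of the supervised branch is shown below (an MLP on random stand-in event features; the SOM branch and the combination of the two networks are not included).

```python
# MLP gamma/hadron classification sketch on placeholder event parameters.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.random((2000, 10))          # hypothetical per-event image parameters
y = rng.integers(0, 2, size=2000)   # 1 = gamma, 0 = hadron

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))        # near chance on random stand-in data
```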

* 8 pages, 4 figures, Submitted to EURASIP Journal on Applied Signal Processing, 2004 
